Jun 20 19:30:30.193183 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] Jun 20 19:30:30.193207 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jun 20 16:58:52 -00 2025 Jun 20 19:30:30.193215 kernel: KASLR enabled Jun 20 19:30:30.193221 kernel: efi: EFI v2.7 by American Megatrends Jun 20 19:30:30.193226 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47e818 RNG=0xebf10018 MEMRESERVE=0xe47c2f98 Jun 20 19:30:30.193231 kernel: random: crng init done Jun 20 19:30:30.193238 kernel: secureboot: Secure boot disabled Jun 20 19:30:30.193244 kernel: esrt: Reserving ESRT space from 0x00000000ea47e818 to 0x00000000ea47e878. Jun 20 19:30:30.193251 kernel: ACPI: Early table checksum verification disabled Jun 20 19:30:30.193256 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) Jun 20 19:30:30.193262 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) Jun 20 19:30:30.193268 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) Jun 20 19:30:30.193273 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) Jun 20 19:30:30.193279 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) Jun 20 19:30:30.193288 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) Jun 20 19:30:30.193294 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) Jun 20 19:30:30.193300 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) Jun 20 19:30:30.193306 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) Jun 20 19:30:30.193312 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) Jun 20 19:30:30.193318 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) Jun 20 19:30:30.193324 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) Jun 20 19:30:30.193330 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) Jun 20 19:30:30.193336 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) Jun 20 19:30:30.193342 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) Jun 20 19:30:30.193349 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) Jun 20 19:30:30.193355 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) Jun 20 19:30:30.193361 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) Jun 20 19:30:30.193367 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) Jun 20 19:30:30.193373 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 Jun 20 19:30:30.193379 kernel: ACPI: Use ACPI SPCR as default console: Yes Jun 20 19:30:30.193385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] Jun 20 19:30:30.193391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] Jun 20 19:30:30.193397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] Jun 20 19:30:30.193403 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] Jun 20 19:30:30.193409 kernel: NUMA: Initialized distance table, cnt=1 Jun 20 19:30:30.193417 kernel: NUMA: Node 0 [mem 0x88300000-0x883fffff] + [mem 0x90000000-0xffffffff] -> [mem 0x88300000-0xffffffff] Jun 20 19:30:30.193423 kernel: NUMA: Node 0 [mem 0x88300000-0xffffffff] + [mem 0x80000000000-0x8007fffffff] -> [mem 0x88300000-0x8007fffffff] Jun 20 19:30:30.193429 kernel: NUMA: Node 0 [mem 0x88300000-0x8007fffffff] + [mem 0x80100000000-0x83fffffffff] -> [mem 0x88300000-0x83fffffffff] Jun 20 19:30:30.193435 kernel: NODE_DATA(0) allocated [mem 0x83fdffd8dc0-0x83fdffdffff] Jun 20 19:30:30.193441 kernel: Zone ranges: Jun 20 19:30:30.193450 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] Jun 20 19:30:30.193457 kernel: DMA32 empty Jun 20 19:30:30.193464 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] Jun 20 19:30:30.193470 kernel: Device empty Jun 20 19:30:30.193476 kernel: Movable zone start for each node Jun 20 19:30:30.193483 kernel: Early memory node ranges Jun 20 19:30:30.193489 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] Jun 20 19:30:30.193495 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] Jun 20 19:30:30.193501 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] Jun 20 19:30:30.193508 kernel: node 0: [mem 0x0000000094000000-0x00000000eba37fff] Jun 20 19:30:30.193514 kernel: node 0: [mem 0x00000000eba38000-0x00000000ebeccfff] Jun 20 19:30:30.193522 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] Jun 20 19:30:30.193528 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] Jun 20 19:30:30.193534 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] Jun 20 19:30:30.193540 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] Jun 20 19:30:30.193547 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff] Jun 20 19:30:30.193553 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] Jun 20 19:30:30.193559 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] Jun 20 19:30:30.193566 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] Jun 20 19:30:30.193572 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] Jun 20 19:30:30.193578 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] Jun 20 19:30:30.193584 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] Jun 20 19:30:30.193591 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] Jun 20 19:30:30.193598 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] Jun 20 19:30:30.193605 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] Jun 20 19:30:30.193611 kernel: On node 0, zone DMA: 768 pages in unavailable ranges Jun 20 19:30:30.193617 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges Jun 20 19:30:30.193624 kernel: psci: probing for conduit method 
from ACPI. Jun 20 19:30:30.193630 kernel: psci: PSCIv1.1 detected in firmware. Jun 20 19:30:30.193636 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 19:30:30.193642 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 20 19:30:30.193649 kernel: psci: SMC Calling Convention v1.2 Jun 20 19:30:30.193655 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 20 19:30:30.193661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Jun 20 19:30:30.193668 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Jun 20 19:30:30.193675 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Jun 20 19:30:30.193682 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Jun 20 19:30:30.193688 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Jun 20 19:30:30.193694 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Jun 20 19:30:30.193701 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Jun 20 19:30:30.193707 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Jun 20 19:30:30.193713 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Jun 20 19:30:30.193720 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Jun 20 19:30:30.193726 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Jun 20 19:30:30.193732 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Jun 20 19:30:30.193739 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Jun 20 19:30:30.193746 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Jun 20 19:30:30.193752 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Jun 20 19:30:30.193759 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Jun 20 19:30:30.193765 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Jun 20 19:30:30.193771 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Jun 20 19:30:30.193778 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Jun 20 19:30:30.193784 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Jun 20 19:30:30.193790 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Jun 20 19:30:30.193797 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Jun 20 19:30:30.193803 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 Jun 20 19:30:30.193809 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Jun 20 19:30:30.193815 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Jun 20 19:30:30.193823 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Jun 20 19:30:30.193829 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Jun 20 19:30:30.193836 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Jun 20 19:30:30.193842 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Jun 20 19:30:30.193852 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Jun 20 19:30:30.193859 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Jun 20 19:30:30.193865 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Jun 20 19:30:30.193872 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Jun 20 19:30:30.193878 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Jun 20 19:30:30.193884 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Jun 20 19:30:30.193891 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Jun 20 19:30:30.193898 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Jun 20 19:30:30.193905 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 Jun 20 19:30:30.193911 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x130100 -> Node 0 Jun 20 19:30:30.193917 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Jun 20 19:30:30.193923 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Jun 20 19:30:30.193930 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Jun 20 19:30:30.193936 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Jun 20 19:30:30.193942 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Jun 20 19:30:30.193949 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Jun 20 19:30:30.193961 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Jun 20 19:30:30.193969 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 Jun 20 19:30:30.193975 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Jun 20 19:30:30.193982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Jun 20 19:30:30.193989 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Jun 20 19:30:30.193996 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Jun 20 19:30:30.194003 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Jun 20 19:30:30.194011 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Jun 20 19:30:30.194017 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Jun 20 19:30:30.194024 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Jun 20 19:30:30.194031 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Jun 20 19:30:30.194037 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Jun 20 19:30:30.194044 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Jun 20 19:30:30.194051 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Jun 20 19:30:30.194057 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Jun 20 19:30:30.194064 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Jun 20 19:30:30.194071 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Jun 20 19:30:30.194077 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Jun 20 19:30:30.194084 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Jun 20 19:30:30.194092 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Jun 20 19:30:30.194099 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Jun 20 19:30:30.194106 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Jun 20 19:30:30.194112 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Jun 20 19:30:30.194119 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Jun 20 19:30:30.194126 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Jun 20 19:30:30.194133 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 Jun 20 19:30:30.194139 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Jun 20 19:30:30.194146 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Jun 20 19:30:30.194153 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Jun 20 19:30:30.194159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Jun 20 19:30:30.194166 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Jun 20 19:30:30.194174 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Jun 20 19:30:30.194181 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Jun 20 19:30:30.194187 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Jun 20 19:30:30.194194 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jun 20 19:30:30.194201 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jun 20 19:30:30.194207 kernel: 
pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Jun 20 19:30:30.194214 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Jun 20 19:30:30.194221 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Jun 20 19:30:30.194227 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Jun 20 19:30:30.194234 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Jun 20 19:30:30.194241 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Jun 20 19:30:30.194249 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Jun 20 19:30:30.194256 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Jun 20 19:30:30.194263 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Jun 20 19:30:30.194269 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Jun 20 19:30:30.194276 kernel: Detected PIPT I-cache on CPU0 Jun 20 19:30:30.194283 kernel: CPU features: detected: GIC system register CPU interface Jun 20 19:30:30.194289 kernel: CPU features: detected: Virtualization Host Extensions Jun 20 19:30:30.194296 kernel: CPU features: detected: Spectre-v4 Jun 20 19:30:30.194303 kernel: CPU features: detected: Spectre-BHB Jun 20 19:30:30.194310 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 20 19:30:30.194316 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 20 19:30:30.194324 kernel: CPU features: detected: ARM erratum 1418040 Jun 20 19:30:30.194331 kernel: CPU features: detected: SSBS not fully self-synchronizing Jun 20 19:30:30.194338 kernel: alternatives: applying boot alternatives Jun 20 19:30:30.194346 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 19:30:30.194353 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:30:30.194360 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jun 20 19:30:30.194367 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Jun 20 19:30:30.194373 kernel: printk: log_buf_len min size: 262144 bytes Jun 20 19:30:30.194380 kernel: printk: log_buf_len: 1048576 bytes Jun 20 19:30:30.194387 kernel: printk: early log buf free: 249568(95%) Jun 20 19:30:30.194394 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Jun 20 19:30:30.194402 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Jun 20 19:30:30.194409 kernel: Fallback order for Node 0: 0 Jun 20 19:30:30.194416 kernel: Built 1 zonelists, mobility grouping on. Total pages: 67043584 Jun 20 19:30:30.194422 kernel: Policy zone: Normal Jun 20 19:30:30.194429 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:30:30.194436 kernel: software IO TLB: area num 128. Jun 20 19:30:30.194442 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Jun 20 19:30:30.194449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Jun 20 19:30:30.194456 kernel: rcu: Preemptible hierarchical RCU implementation. 
Jun 20 19:30:30.194464 kernel: rcu: RCU event tracing is enabled. Jun 20 19:30:30.194471 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Jun 20 19:30:30.194479 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:30:30.194486 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:30:30.194493 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:30:30.194500 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Jun 20 19:30:30.194507 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Jun 20 19:30:30.194513 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Jun 20 19:30:30.194520 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 19:30:30.194527 kernel: GICv3: GIC: Using split EOI/Deactivate mode Jun 20 19:30:30.194534 kernel: GICv3: 672 SPIs implemented Jun 20 19:30:30.194540 kernel: GICv3: 0 Extended SPIs implemented Jun 20 19:30:30.194547 kernel: Root IRQ handler: gic_handle_irq Jun 20 19:30:30.194554 kernel: GICv3: GICv3 features: 16 PPIs Jun 20 19:30:30.194562 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=1 Jun 20 19:30:30.194568 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Jun 20 19:30:30.194575 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Jun 20 19:30:30.194582 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Jun 20 19:30:30.194588 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Jun 20 19:30:30.194595 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Jun 20 19:30:30.194601 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Jun 20 19:30:30.194608 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Jun 20 19:30:30.194615 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Jun 20 19:30:30.194621 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Jun 20 19:30:30.194628 kernel: ITS [mem 0x100100040000-0x10010005ffff] Jun 20 19:30:30.194635 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000310000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194643 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000320000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194650 kernel: ITS [mem 0x100100060000-0x10010007ffff] Jun 20 19:30:30.194657 kernel: ITS@0x0000100100060000: allocated 8192 Devices @80000340000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194664 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @80000350000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194670 kernel: ITS [mem 0x100100080000-0x10010009ffff] Jun 20 19:30:30.194677 kernel: ITS@0x0000100100080000: allocated 8192 Devices @80000370000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194684 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @80000380000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194691 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Jun 20 19:30:30.194697 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @800003a0000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194704 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @800003b0000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194711 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Jun 20 19:30:30.194719 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @800003d0000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194726 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @800003e0000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194733 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Jun 20 19:30:30.194739 
kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000800000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194746 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000810000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194753 kernel: ITS [mem 0x100100100000-0x10010011ffff] Jun 20 19:30:30.194760 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000830000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194767 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @80000840000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194773 kernel: ITS [mem 0x100100120000-0x10010013ffff] Jun 20 19:30:30.194780 kernel: ITS@0x0000100100120000: allocated 8192 Devices @80000860000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:30:30.194787 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @80000870000 (flat, esz 2, psz 64K, shr 1) Jun 20 19:30:30.194795 kernel: GICv3: using LPI property table @0x0000080000880000 Jun 20 19:30:30.194802 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000080000890000 Jun 20 19:30:30.194809 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:30:30.194816 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.194823 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Jun 20 19:30:30.194829 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Jun 20 19:30:30.194836 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 20 19:30:30.194843 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 20 19:30:30.194852 kernel: Console: colour dummy device 80x25 Jun 20 19:30:30.194859 kernel: printk: legacy console [tty0] enabled Jun 20 19:30:30.194866 kernel: ACPI: Core revision 20240827 Jun 20 19:30:30.194874 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 20 19:30:30.194881 kernel: pid_max: default: 81920 minimum: 640 Jun 20 19:30:30.194888 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:30:30.194895 kernel: landlock: Up and running. Jun 20 19:30:30.194902 kernel: SELinux: Initializing. Jun 20 19:30:30.194909 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:30:30.194916 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:30:30.194923 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:30:30.194930 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:30:30.194938 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jun 20 19:30:30.194945 kernel: Remapping and enabling EFI services. Jun 20 19:30:30.194952 kernel: smp: Bringing up secondary CPUs ... 
Jun 20 19:30:30.194959 kernel: Detected PIPT I-cache on CPU1 Jun 20 19:30:30.194966 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Jun 20 19:30:30.194973 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000800008a0000 Jun 20 19:30:30.194980 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.194987 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Jun 20 19:30:30.194993 kernel: Detected PIPT I-cache on CPU2 Jun 20 19:30:30.195001 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Jun 20 19:30:30.195008 kernel: GICv3: CPU2: using allocated LPI pending table @0x00000800008b0000 Jun 20 19:30:30.195015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195022 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Jun 20 19:30:30.195029 kernel: Detected PIPT I-cache on CPU3 Jun 20 19:30:30.195036 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Jun 20 19:30:30.195043 kernel: GICv3: CPU3: using allocated LPI pending table @0x00000800008c0000 Jun 20 19:30:30.195050 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195056 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Jun 20 19:30:30.195063 kernel: Detected PIPT I-cache on CPU4 Jun 20 19:30:30.195114 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Jun 20 19:30:30.195120 kernel: GICv3: CPU4: using allocated LPI pending table @0x00000800008d0000 Jun 20 19:30:30.195127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195134 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Jun 20 19:30:30.195141 kernel: Detected PIPT I-cache on CPU5 Jun 20 19:30:30.195148 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Jun 20 19:30:30.195154 kernel: GICv3: CPU5: using allocated LPI pending table @0x00000800008e0000 Jun 20 19:30:30.195161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195168 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1] Jun 20 19:30:30.195176 kernel: Detected PIPT I-cache on CPU6 Jun 20 19:30:30.195183 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Jun 20 19:30:30.195190 kernel: GICv3: CPU6: using allocated LPI pending table @0x00000800008f0000 Jun 20 19:30:30.195197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195204 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Jun 20 19:30:30.195210 kernel: Detected PIPT I-cache on CPU7 Jun 20 19:30:30.195217 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Jun 20 19:30:30.195224 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000900000 Jun 20 19:30:30.195231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195238 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Jun 20 19:30:30.195246 kernel: Detected PIPT I-cache on CPU8 Jun 20 19:30:30.195253 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Jun 20 19:30:30.195260 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000910000 Jun 20 19:30:30.195267 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195273 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Jun 20 19:30:30.195280 
kernel: Detected PIPT I-cache on CPU9 Jun 20 19:30:30.195287 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Jun 20 19:30:30.195294 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000920000 Jun 20 19:30:30.195301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195307 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Jun 20 19:30:30.195316 kernel: Detected PIPT I-cache on CPU10 Jun 20 19:30:30.195322 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Jun 20 19:30:30.195329 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000930000 Jun 20 19:30:30.195336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195343 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Jun 20 19:30:30.195350 kernel: Detected PIPT I-cache on CPU11 Jun 20 19:30:30.195357 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Jun 20 19:30:30.195364 kernel: GICv3: CPU11: using allocated LPI pending table @0x0000080000940000 Jun 20 19:30:30.195371 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195379 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Jun 20 19:30:30.195386 kernel: Detected PIPT I-cache on CPU12 Jun 20 19:30:30.195393 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Jun 20 19:30:30.195399 kernel: GICv3: CPU12: using allocated LPI pending table @0x0000080000950000 Jun 20 19:30:30.195406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195413 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Jun 20 19:30:30.195420 kernel: Detected PIPT I-cache on CPU13 Jun 20 19:30:30.195427 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Jun 20 19:30:30.195433 kernel: GICv3: CPU13: using allocated LPI pending table @0x0000080000960000 Jun 20 19:30:30.195440 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195449 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] Jun 20 19:30:30.195455 kernel: Detected PIPT I-cache on CPU14 Jun 20 19:30:30.195462 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Jun 20 19:30:30.195469 kernel: GICv3: CPU14: using allocated LPI pending table @0x0000080000970000 Jun 20 19:30:30.195476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195483 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Jun 20 19:30:30.195489 kernel: Detected PIPT I-cache on CPU15 Jun 20 19:30:30.195496 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Jun 20 19:30:30.195503 kernel: GICv3: CPU15: using allocated LPI pending table @0x0000080000980000 Jun 20 19:30:30.195511 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195518 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Jun 20 19:30:30.195525 kernel: Detected PIPT I-cache on CPU16 Jun 20 19:30:30.195532 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Jun 20 19:30:30.195539 kernel: GICv3: CPU16: using allocated LPI pending table @0x0000080000990000 Jun 20 19:30:30.195546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195553 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Jun 20 19:30:30.195559 
kernel: Detected PIPT I-cache on CPU17 Jun 20 19:30:30.195566 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 Jun 20 19:30:30.195574 kernel: GICv3: CPU17: using allocated LPI pending table @0x00000800009a0000 Jun 20 19:30:30.195581 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195588 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Jun 20 19:30:30.195595 kernel: Detected PIPT I-cache on CPU18 Jun 20 19:30:30.195601 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Jun 20 19:30:30.195608 kernel: GICv3: CPU18: using allocated LPI pending table @0x00000800009b0000 Jun 20 19:30:30.195615 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195622 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Jun 20 19:30:30.195638 kernel: Detected PIPT I-cache on CPU19 Jun 20 19:30:30.195647 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Jun 20 19:30:30.195655 kernel: GICv3: CPU19: using allocated LPI pending table @0x00000800009c0000 Jun 20 19:30:30.195662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195670 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Jun 20 19:30:30.195677 kernel: Detected PIPT I-cache on CPU20 Jun 20 19:30:30.195684 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Jun 20 19:30:30.195692 kernel: GICv3: CPU20: using allocated LPI pending table @0x00000800009d0000 Jun 20 19:30:30.195699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195706 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Jun 20 19:30:30.195713 kernel: Detected PIPT I-cache on CPU21 Jun 20 19:30:30.195722 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Jun 20 19:30:30.195729 kernel: GICv3: CPU21: using allocated LPI pending table @0x00000800009e0000 Jun 20 19:30:30.195736 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195743 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Jun 20 19:30:30.195750 kernel: Detected PIPT I-cache on CPU22 Jun 20 19:30:30.195758 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Jun 20 19:30:30.195767 kernel: GICv3: CPU22: using allocated LPI pending table @0x00000800009f0000 Jun 20 19:30:30.195775 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195782 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Jun 20 19:30:30.195789 kernel: Detected PIPT I-cache on CPU23 Jun 20 19:30:30.195796 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Jun 20 19:30:30.195804 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000a00000 Jun 20 19:30:30.195811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195818 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Jun 20 19:30:30.195825 kernel: Detected PIPT I-cache on CPU24 Jun 20 19:30:30.195832 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Jun 20 19:30:30.195841 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000a10000 Jun 20 19:30:30.195850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195858 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Jun 20 19:30:30.195865 
kernel: Detected PIPT I-cache on CPU25 Jun 20 19:30:30.195872 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Jun 20 19:30:30.195879 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000a20000 Jun 20 19:30:30.195887 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195894 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Jun 20 19:30:30.195901 kernel: Detected PIPT I-cache on CPU26 Jun 20 19:30:30.195910 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Jun 20 19:30:30.195917 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000a30000 Jun 20 19:30:30.195924 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195931 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Jun 20 19:30:30.195938 kernel: Detected PIPT I-cache on CPU27 Jun 20 19:30:30.195946 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Jun 20 19:30:30.195953 kernel: GICv3: CPU27: using allocated LPI pending table @0x0000080000a40000 Jun 20 19:30:30.195960 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.195967 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Jun 20 19:30:30.195976 kernel: Detected PIPT I-cache on CPU28 Jun 20 19:30:30.195983 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Jun 20 19:30:30.195990 kernel: GICv3: CPU28: using allocated LPI pending table @0x0000080000a50000 Jun 20 19:30:30.195998 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196005 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Jun 20 19:30:30.196012 kernel: Detected PIPT I-cache on CPU29 Jun 20 19:30:30.196019 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Jun 20 19:30:30.196026 kernel: GICv3: CPU29: using allocated LPI pending table @0x0000080000a60000 Jun 20 19:30:30.196033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196040 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] Jun 20 19:30:30.196049 kernel: Detected PIPT I-cache on CPU30 Jun 20 19:30:30.196056 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Jun 20 19:30:30.196063 kernel: GICv3: CPU30: using allocated LPI pending table @0x0000080000a70000 Jun 20 19:30:30.196071 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196078 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Jun 20 19:30:30.196085 kernel: Detected PIPT I-cache on CPU31 Jun 20 19:30:30.196092 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Jun 20 19:30:30.196099 kernel: GICv3: CPU31: using allocated LPI pending table @0x0000080000a80000 Jun 20 19:30:30.196107 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196115 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Jun 20 19:30:30.196122 kernel: Detected PIPT I-cache on CPU32 Jun 20 19:30:30.196129 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Jun 20 19:30:30.196136 kernel: GICv3: CPU32: using allocated LPI pending table @0x0000080000a90000 Jun 20 19:30:30.196144 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196151 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Jun 20 
19:30:30.196158 kernel: Detected PIPT I-cache on CPU33 Jun 20 19:30:30.196165 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Jun 20 19:30:30.196172 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000aa0000 Jun 20 19:30:30.196179 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196188 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Jun 20 19:30:30.196195 kernel: Detected PIPT I-cache on CPU34 Jun 20 19:30:30.196202 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Jun 20 19:30:30.196209 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000ab0000 Jun 20 19:30:30.196216 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196223 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Jun 20 19:30:30.196231 kernel: Detected PIPT I-cache on CPU35 Jun 20 19:30:30.196238 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Jun 20 19:30:30.196245 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000ac0000 Jun 20 19:30:30.196253 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196261 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Jun 20 19:30:30.196268 kernel: Detected PIPT I-cache on CPU36 Jun 20 19:30:30.196275 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Jun 20 19:30:30.196282 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000ad0000 Jun 20 19:30:30.196289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196296 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Jun 20 19:30:30.196304 kernel: Detected PIPT I-cache on CPU37 Jun 20 19:30:30.196311 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Jun 20 19:30:30.196318 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000ae0000 Jun 20 19:30:30.196327 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196334 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Jun 20 19:30:30.196341 kernel: Detected PIPT I-cache on CPU38 Jun 20 19:30:30.196348 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Jun 20 19:30:30.196355 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000af0000 Jun 20 19:30:30.196363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196370 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Jun 20 19:30:30.196377 kernel: Detected PIPT I-cache on CPU39 Jun 20 19:30:30.196384 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Jun 20 19:30:30.196393 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000b00000 Jun 20 19:30:30.196400 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196407 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Jun 20 19:30:30.196415 kernel: Detected PIPT I-cache on CPU40 Jun 20 19:30:30.196422 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Jun 20 19:30:30.196429 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000b10000 Jun 20 19:30:30.196436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196444 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Jun 20 
19:30:30.196452 kernel: Detected PIPT I-cache on CPU41 Jun 20 19:30:30.196459 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Jun 20 19:30:30.196466 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000b20000 Jun 20 19:30:30.196474 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196481 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Jun 20 19:30:30.196488 kernel: Detected PIPT I-cache on CPU42 Jun 20 19:30:30.196495 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Jun 20 19:30:30.196502 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000b30000 Jun 20 19:30:30.196510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196517 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Jun 20 19:30:30.196525 kernel: Detected PIPT I-cache on CPU43 Jun 20 19:30:30.196532 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Jun 20 19:30:30.196539 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000b40000 Jun 20 19:30:30.196547 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196554 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Jun 20 19:30:30.196561 kernel: Detected PIPT I-cache on CPU44 Jun 20 19:30:30.196568 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Jun 20 19:30:30.196576 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000b50000 Jun 20 19:30:30.196584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196592 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Jun 20 19:30:30.196601 kernel: Detected PIPT I-cache on CPU45 Jun 20 19:30:30.196608 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Jun 20 19:30:30.196615 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000b60000 Jun 20 19:30:30.196622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196629 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] Jun 20 19:30:30.196637 kernel: Detected PIPT I-cache on CPU46 Jun 20 19:30:30.196644 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Jun 20 19:30:30.196651 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000b70000 Jun 20 19:30:30.196660 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196667 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Jun 20 19:30:30.196674 kernel: Detected PIPT I-cache on CPU47 Jun 20 19:30:30.196682 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Jun 20 19:30:30.196689 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000b80000 Jun 20 19:30:30.196696 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196703 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Jun 20 19:30:30.196710 kernel: Detected PIPT I-cache on CPU48 Jun 20 19:30:30.196717 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Jun 20 19:30:30.196725 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000b90000 Jun 20 19:30:30.196734 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196741 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] 
Jun 20 19:30:30.196748 kernel: Detected PIPT I-cache on CPU49 Jun 20 19:30:30.196755 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Jun 20 19:30:30.196762 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000ba0000 Jun 20 19:30:30.196770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196777 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Jun 20 19:30:30.196784 kernel: Detected PIPT I-cache on CPU50 Jun 20 19:30:30.196791 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Jun 20 19:30:30.196800 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000bb0000 Jun 20 19:30:30.196807 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196814 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Jun 20 19:30:30.196821 kernel: Detected PIPT I-cache on CPU51 Jun 20 19:30:30.196828 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Jun 20 19:30:30.196835 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000bc0000 Jun 20 19:30:30.196843 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196853 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Jun 20 19:30:30.196860 kernel: Detected PIPT I-cache on CPU52 Jun 20 19:30:30.196868 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Jun 20 19:30:30.196877 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000bd0000 Jun 20 19:30:30.196885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196892 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Jun 20 19:30:30.196899 kernel: Detected PIPT I-cache on CPU53 Jun 20 19:30:30.196906 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Jun 20 19:30:30.196914 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000be0000 Jun 20 19:30:30.196921 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196928 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Jun 20 19:30:30.196935 kernel: Detected PIPT I-cache on CPU54 Jun 20 19:30:30.196944 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Jun 20 19:30:30.196951 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000bf0000 Jun 20 19:30:30.196958 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.196965 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Jun 20 19:30:30.196973 kernel: Detected PIPT I-cache on CPU55 Jun 20 19:30:30.196980 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Jun 20 19:30:30.196987 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000c00000 Jun 20 19:30:30.196994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197001 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Jun 20 19:30:30.197009 kernel: Detected PIPT I-cache on CPU56 Jun 20 19:30:30.197017 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Jun 20 19:30:30.197024 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000c10000 Jun 20 19:30:30.197032 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197039 kernel: CPU56: Booted secondary processor 0x0000020100 
[0x413fd0c1] Jun 20 19:30:30.197046 kernel: Detected PIPT I-cache on CPU57 Jun 20 19:30:30.197053 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Jun 20 19:30:30.197060 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000c20000 Jun 20 19:30:30.197068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197075 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Jun 20 19:30:30.197083 kernel: Detected PIPT I-cache on CPU58 Jun 20 19:30:30.197090 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Jun 20 19:30:30.197098 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000c30000 Jun 20 19:30:30.197105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197112 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Jun 20 19:30:30.197119 kernel: Detected PIPT I-cache on CPU59 Jun 20 19:30:30.197126 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Jun 20 19:30:30.197134 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000c40000 Jun 20 19:30:30.197141 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197148 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Jun 20 19:30:30.197156 kernel: Detected PIPT I-cache on CPU60 Jun 20 19:30:30.197163 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Jun 20 19:30:30.197170 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000c50000 Jun 20 19:30:30.197178 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197185 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Jun 20 19:30:30.197192 kernel: Detected PIPT I-cache on CPU61 Jun 20 19:30:30.197199 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Jun 20 19:30:30.197206 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000c60000 Jun 20 19:30:30.197214 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197222 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Jun 20 19:30:30.197229 kernel: Detected PIPT I-cache on CPU62 Jun 20 19:30:30.197236 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Jun 20 19:30:30.197243 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000c70000 Jun 20 19:30:30.197251 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197258 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Jun 20 19:30:30.197265 kernel: Detected PIPT I-cache on CPU63 Jun 20 19:30:30.197272 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Jun 20 19:30:30.197279 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000c80000 Jun 20 19:30:30.197288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197295 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Jun 20 19:30:30.197302 kernel: Detected PIPT I-cache on CPU64 Jun 20 19:30:30.197309 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Jun 20 19:30:30.197317 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000c90000 Jun 20 19:30:30.197324 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197331 kernel: CPU64: Booted secondary processor 0x0000110100 
[0x413fd0c1] Jun 20 19:30:30.197338 kernel: Detected PIPT I-cache on CPU65 Jun 20 19:30:30.197345 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Jun 20 19:30:30.197353 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000ca0000 Jun 20 19:30:30.197361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197368 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Jun 20 19:30:30.197375 kernel: Detected PIPT I-cache on CPU66 Jun 20 19:30:30.197382 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Jun 20 19:30:30.197390 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000cb0000 Jun 20 19:30:30.197397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197404 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Jun 20 19:30:30.197411 kernel: Detected PIPT I-cache on CPU67 Jun 20 19:30:30.197418 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Jun 20 19:30:30.197427 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000cc0000 Jun 20 19:30:30.197434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197441 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Jun 20 19:30:30.197448 kernel: Detected PIPT I-cache on CPU68 Jun 20 19:30:30.197455 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Jun 20 19:30:30.197463 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000cd0000 Jun 20 19:30:30.197470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197477 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Jun 20 19:30:30.197484 kernel: Detected PIPT I-cache on CPU69 Jun 20 19:30:30.197492 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Jun 20 19:30:30.197500 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000ce0000 Jun 20 19:30:30.197508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197515 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Jun 20 19:30:30.197522 kernel: Detected PIPT I-cache on CPU70 Jun 20 19:30:30.197529 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Jun 20 19:30:30.197537 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000cf0000 Jun 20 19:30:30.197544 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197551 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Jun 20 19:30:30.197559 kernel: Detected PIPT I-cache on CPU71 Jun 20 19:30:30.197567 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Jun 20 19:30:30.197575 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000d00000 Jun 20 19:30:30.197582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197589 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Jun 20 19:30:30.197596 kernel: Detected PIPT I-cache on CPU72 Jun 20 19:30:30.197603 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Jun 20 19:30:30.197611 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000d10000 Jun 20 19:30:30.197618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197625 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] Jun 20 19:30:30.197632 kernel: Detected PIPT I-cache on CPU73 Jun 20 19:30:30.197640 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Jun 20 19:30:30.197648 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000d20000 Jun 20 19:30:30.197655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197662 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Jun 20 19:30:30.197669 kernel: Detected PIPT I-cache on CPU74 Jun 20 19:30:30.197677 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Jun 20 19:30:30.197684 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000d30000 Jun 20 19:30:30.197691 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197698 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Jun 20 19:30:30.197706 kernel: Detected PIPT I-cache on CPU75 Jun 20 19:30:30.197714 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Jun 20 19:30:30.197721 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000d40000 Jun 20 19:30:30.197728 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197735 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Jun 20 19:30:30.197742 kernel: Detected PIPT I-cache on CPU76 Jun 20 19:30:30.197749 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Jun 20 19:30:30.197757 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000d50000 Jun 20 19:30:30.197764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197771 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Jun 20 19:30:30.197780 kernel: Detected PIPT I-cache on CPU77 Jun 20 19:30:30.197787 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Jun 20 19:30:30.197794 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000d60000 Jun 20 19:30:30.197801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197808 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Jun 20 19:30:30.197816 kernel: Detected PIPT I-cache on CPU78 Jun 20 19:30:30.197823 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Jun 20 19:30:30.197830 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000d70000 Jun 20 19:30:30.197837 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197845 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Jun 20 19:30:30.197888 kernel: Detected PIPT I-cache on CPU79 Jun 20 19:30:30.197895 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Jun 20 19:30:30.197902 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000d80000 Jun 20 19:30:30.197910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:30:30.197917 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Jun 20 19:30:30.197924 kernel: smp: Brought up 1 node, 80 CPUs Jun 20 19:30:30.197931 kernel: SMP: Total of 80 processors activated. 
Jun 20 19:30:30.197939 kernel: CPU: All CPU(s) started at EL2 Jun 20 19:30:30.197947 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 19:30:30.197955 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 20 19:30:30.197962 kernel: CPU features: detected: Common not Private translations Jun 20 19:30:30.197969 kernel: CPU features: detected: CRC32 instructions Jun 20 19:30:30.197976 kernel: CPU features: detected: Enhanced Virtualization Traps Jun 20 19:30:30.197984 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 20 19:30:30.197991 kernel: CPU features: detected: LSE atomic instructions Jun 20 19:30:30.197998 kernel: CPU features: detected: Privileged Access Never Jun 20 19:30:30.198005 kernel: CPU features: detected: RAS Extension Support Jun 20 19:30:30.198014 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 20 19:30:30.198021 kernel: alternatives: applying system-wide alternatives Jun 20 19:30:30.198028 kernel: CPU features: detected: Hardware dirty bit management on CPU0-79 Jun 20 19:30:30.198036 kernel: Memory: 262860252K/268174336K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 5254404K reserved, 0K cma-reserved) Jun 20 19:30:30.198043 kernel: devtmpfs: initialized Jun 20 19:30:30.198051 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:30:30.198058 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jun 20 19:30:30.198066 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 20 19:30:30.198073 kernel: 0 pages in range for non-PLT usage Jun 20 19:30:30.198082 kernel: 508544 pages in range for PLT usage Jun 20 19:30:30.198089 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:30:30.198096 kernel: SMBIOS 3.4.0 present. Jun 20 19:30:30.198103 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Jun 20 19:30:30.198111 kernel: DMI: Memory slots populated: 8/16 Jun 20 19:30:30.198118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:30:30.198125 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Jun 20 19:30:30.198132 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 19:30:30.198140 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 19:30:30.198148 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:30:30.198155 kernel: audit: type=2000 audit(0.064:1): state=initialized audit_enabled=0 res=1 Jun 20 19:30:30.198163 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:30:30.198170 kernel: cpuidle: using governor menu Jun 20 19:30:30.198177 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
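Several of the feature-detection lines above describe capabilities that arm64 also exposes to userspace as hwcaps in the Features line of /proc/cpuinfo. A minimal sketch of checking four of them from userspace follows; the mapping from boot-message names to flag names is an assumption of this sketch rather than something printed in the log.

# Sketch only: confirm from userspace some of the CPU features reported above.
# Assumed mapping from boot-message names to /proc/cpuinfo flag names:
#   CRC32 instructions -> crc32, LSE atomic instructions -> atomics,
#   RCpc load-acquire (LDAPR) -> lrcpc, SSBS -> ssbs.
expected = {
    "crc32":   "CRC32 instructions",
    "atomics": "LSE atomic instructions",
    "lrcpc":   "RCpc load-acquire (LDAPR)",
    "ssbs":    "Speculative Store Bypassing Safe (SSBS)",
}

features = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("Features"):
            features.update(line.split(":", 1)[1].split())

for flag, name in expected.items():
    print(f"{name}: {'present' if flag in features else 'missing'} ({flag})")

Entries such as Privileged Access Never or the hardware dirty bit management reported above are kernel-side features rather than userspace hwcaps, so this boot log is where they are visible.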
Jun 20 19:30:30.198184 kernel: ASID allocator initialised with 32768 entries Jun 20 19:30:30.198191 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:30:30.198199 kernel: Serial: AMBA PL011 UART driver Jun 20 19:30:30.198206 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:30:30.198214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:30:30.198222 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 19:30:30.198229 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 19:30:30.198236 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:30:30.198243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:30:30.198250 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 19:30:30.198258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 19:30:30.198265 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:30:30.198272 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:30:30.198280 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:30:30.198287 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Jun 20 19:30:30.198295 kernel: ACPI: Interpreter enabled Jun 20 19:30:30.198302 kernel: ACPI: Using GIC for interrupt routing Jun 20 19:30:30.198309 kernel: ACPI: MCFG table detected, 8 entries Jun 20 19:30:30.198316 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198323 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198330 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198338 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198346 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198353 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198360 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198368 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Jun 20 19:30:30.198375 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Jun 20 19:30:30.198382 kernel: printk: legacy console [ttyAMA0] enabled Jun 20 19:30:30.198390 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Jun 20 19:30:30.198397 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Jun 20 19:30:30.198526 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.198592 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.198649 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.198705 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.198760 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.198814 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Jun 20 19:30:30.198824 kernel: PCI host bridge to bus 000d:00 Jun 20 19:30:30.198894 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Jun 20 19:30:30.198949 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Jun 20 19:30:30.199000 kernel: 
pci_bus 000d:00: root bus resource [bus 00-ff] Jun 20 19:30:30.199073 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.199141 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.199201 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.199263 kernel: pci 000d:00:01.0: enabling Extended Tags Jun 20 19:30:30.199322 kernel: pci 000d:00:01.0: supports D1 D2 Jun 20 19:30:30.199383 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.199450 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.199508 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.199565 kernel: pci 000d:00:02.0: supports D1 D2 Jun 20 19:30:30.199622 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.199687 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.199747 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.199804 kernel: pci 000d:00:03.0: supports D1 D2 Jun 20 19:30:30.199864 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.199929 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.199987 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.200044 kernel: pci 000d:00:04.0: supports D1 D2 Jun 20 19:30:30.200103 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.200112 kernel: acpiphp: Slot [1] registered Jun 20 19:30:30.200120 kernel: acpiphp: Slot [2] registered Jun 20 19:30:30.200127 kernel: acpiphp: Slot [3] registered Jun 20 19:30:30.200134 kernel: acpiphp: Slot [4] registered Jun 20 19:30:30.200185 kernel: pci_bus 000d:00: on NUMA node 0 Jun 20 19:30:30.200244 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 19:30:30.200302 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.200361 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.200419 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jun 20 19:30:30.200475 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.200532 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.200590 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:30:30.200648 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.200705 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.200764 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:30:30.200821 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.200881 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.200938 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]: assigned Jun 20 19:30:30.200996 kernel: pci 000d:00:01.0: bridge window [mem 
0x340000000000-0x3400001fffff 64bit pref]: assigned Jun 20 19:30:30.201052 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]: assigned Jun 20 19:30:30.201110 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]: assigned Jun 20 19:30:30.201168 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]: assigned Jun 20 19:30:30.201226 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]: assigned Jun 20 19:30:30.201282 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]: assigned Jun 20 19:30:30.201339 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]: assigned Jun 20 19:30:30.201396 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.201453 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.201510 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.201570 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.201627 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.201684 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.201741 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.201797 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.201857 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.201914 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.201971 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.202030 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.202086 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.202143 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.202200 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.202257 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.202314 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.202371 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Jun 20 19:30:30.202428 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Jun 20 19:30:30.202487 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.202546 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Jun 20 19:30:30.202603 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Jun 20 19:30:30.202660 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.202717 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Jun 20 19:30:30.202774 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Jun 20 19:30:30.202831 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.202893 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Jun 20 19:30:30.202950 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Jun 20 19:30:30.203001 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Jun 20 19:30:30.203052 kernel: 
pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Jun 20 19:30:30.203115 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Jun 20 19:30:30.203168 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Jun 20 19:30:30.203230 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Jun 20 19:30:30.203283 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Jun 20 19:30:30.203352 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Jun 20 19:30:30.203405 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Jun 20 19:30:30.203464 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Jun 20 19:30:30.203516 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Jun 20 19:30:30.203528 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Jun 20 19:30:30.203591 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.203648 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.203707 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.203765 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.203824 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.203883 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Jun 20 19:30:30.203895 kernel: PCI host bridge to bus 0000:00 Jun 20 19:30:30.203954 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Jun 20 19:30:30.204005 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Jun 20 19:30:30.204056 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:30:30.204120 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.204186 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.204248 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.204311 kernel: pci 0000:00:01.0: enabling Extended Tags Jun 20 19:30:30.204374 kernel: pci 0000:00:01.0: supports D1 D2 Jun 20 19:30:30.204432 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.204499 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.204559 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.204616 kernel: pci 0000:00:02.0: supports D1 D2 Jun 20 19:30:30.204673 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.204740 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.204799 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.204873 kernel: pci 0000:00:03.0: supports D1 D2 Jun 20 19:30:30.204933 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.204998 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.205057 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.205114 kernel: pci 0000:00:04.0: supports D1 D2 Jun 20 19:30:30.205173 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.205182 kernel: acpiphp: Slot [1-1] registered Jun 20 19:30:30.205189 kernel: acpiphp: Slot [2-1] registered Jun 20 19:30:30.205197 
kernel: acpiphp: Slot [3-1] registered Jun 20 19:30:30.205204 kernel: acpiphp: Slot [4-1] registered Jun 20 19:30:30.205254 kernel: pci_bus 0000:00: on NUMA node 0 Jun 20 19:30:30.205312 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 19:30:30.205370 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.205428 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.205486 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jun 20 19:30:30.205543 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.205600 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.205657 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:30:30.205714 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.205770 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.205829 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:30:30.205891 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.205949 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.206006 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]: assigned Jun 20 19:30:30.206063 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]: assigned Jun 20 19:30:30.206119 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]: assigned Jun 20 19:30:30.206176 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]: assigned Jun 20 19:30:30.206235 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]: assigned Jun 20 19:30:30.206292 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]: assigned Jun 20 19:30:30.206350 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]: assigned Jun 20 19:30:30.206407 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]: assigned Jun 20 19:30:30.206465 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.206522 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.206578 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.206636 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.206693 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.206750 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.206808 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.206869 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.206927 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.206984 kernel: 
pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.207042 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.207100 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.207158 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.207215 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.207272 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.207329 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.207388 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.207444 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Jun 20 19:30:30.207502 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jun 20 19:30:30.207559 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.207616 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Jun 20 19:30:30.207673 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jun 20 19:30:30.207730 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.207790 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Jun 20 19:30:30.207849 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jun 20 19:30:30.207907 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.207964 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Jun 20 19:30:30.208021 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jun 20 19:30:30.208072 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Jun 20 19:30:30.208122 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Jun 20 19:30:30.208186 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Jun 20 19:30:30.208239 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jun 20 19:30:30.208301 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Jun 20 19:30:30.208354 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jun 20 19:30:30.208421 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Jun 20 19:30:30.208473 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jun 20 19:30:30.208537 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Jun 20 19:30:30.208590 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jun 20 19:30:30.208599 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Jun 20 19:30:30.208661 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.208716 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.208771 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.208826 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.208884 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.208938 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Jun 20 19:30:30.208948 kernel: PCI host bridge to bus 
0005:00 Jun 20 19:30:30.209005 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Jun 20 19:30:30.209056 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Jun 20 19:30:30.209107 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Jun 20 19:30:30.209172 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.209239 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.209297 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.209354 kernel: pci 0005:00:01.0: supports D1 D2 Jun 20 19:30:30.209411 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.209476 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.209533 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jun 20 19:30:30.209592 kernel: pci 0005:00:03.0: supports D1 D2 Jun 20 19:30:30.209649 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.209713 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.209771 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jun 20 19:30:30.209828 kernel: pci 0005:00:05.0: bridge window [mem 0x30100000-0x301fffff] Jun 20 19:30:30.209889 kernel: pci 0005:00:05.0: supports D1 D2 Jun 20 19:30:30.209947 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.210013 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.210070 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jun 20 19:30:30.210127 kernel: pci 0005:00:07.0: bridge window [mem 0x30000000-0x300fffff] Jun 20 19:30:30.210184 kernel: pci 0005:00:07.0: supports D1 D2 Jun 20 19:30:30.210240 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.210249 kernel: acpiphp: Slot [1-2] registered Jun 20 19:30:30.210257 kernel: acpiphp: Slot [2-2] registered Jun 20 19:30:30.210321 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jun 20 19:30:30.210385 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30110000-0x30113fff 64bit] Jun 20 19:30:30.210444 kernel: pci 0005:03:00.0: ROM [mem 0x30100000-0x3010ffff pref] Jun 20 19:30:30.210509 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jun 20 19:30:30.210568 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30010000-0x30013fff 64bit] Jun 20 19:30:30.210626 kernel: pci 0005:04:00.0: ROM [mem 0x30000000-0x3000ffff pref] Jun 20 19:30:30.210677 kernel: pci_bus 0005:00: on NUMA node 0 Jun 20 19:30:30.210735 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 19:30:30.210794 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.210855 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.210913 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jun 20 19:30:30.210970 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.211029 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.211088 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:30:30.211146 kernel: pci 0005:00:05.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.211205 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jun 20 19:30:30.211263 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:30:30.211320 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.211377 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Jun 20 19:30:30.211434 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]: assigned Jun 20 19:30:30.211491 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]: assigned Jun 20 19:30:30.211548 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]: assigned Jun 20 19:30:30.211606 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]: assigned Jun 20 19:30:30.211663 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]: assigned Jun 20 19:30:30.211722 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]: assigned Jun 20 19:30:30.211778 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]: assigned Jun 20 19:30:30.211835 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]: assigned Jun 20 19:30:30.211895 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.211952 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212011 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.212068 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212125 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.212182 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212239 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.212296 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212353 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.212410 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212468 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.212525 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212582 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.212639 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212696 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.212753 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.212810 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.212872 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Jun 20 19:30:30.212929 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jun 20 19:30:30.212986 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jun 20 19:30:30.213043 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Jun 20 
19:30:30.213100 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jun 20 19:30:30.213160 kernel: pci 0005:03:00.0: ROM [mem 0x30400000-0x3040ffff pref]: assigned Jun 20 19:30:30.213219 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30410000-0x30413fff 64bit]: assigned Jun 20 19:30:30.213278 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jun 20 19:30:30.213335 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Jun 20 19:30:30.213393 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jun 20 19:30:30.213453 kernel: pci 0005:04:00.0: ROM [mem 0x30600000-0x3060ffff pref]: assigned Jun 20 19:30:30.213512 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30610000-0x30613fff 64bit]: assigned Jun 20 19:30:30.213570 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jun 20 19:30:30.213626 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Jun 20 19:30:30.213683 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jun 20 19:30:30.213737 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Jun 20 19:30:30.213788 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Jun 20 19:30:30.213851 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Jun 20 19:30:30.213905 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jun 20 19:30:30.213974 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Jun 20 19:30:30.214027 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jun 20 19:30:30.214088 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Jun 20 19:30:30.214143 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jun 20 19:30:30.214204 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Jun 20 19:30:30.214257 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jun 20 19:30:30.214267 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Jun 20 19:30:30.214328 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.214386 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.214440 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.214496 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.214550 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.214604 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Jun 20 19:30:30.214614 kernel: PCI host bridge to bus 0003:00 Jun 20 19:30:30.214671 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Jun 20 19:30:30.214724 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Jun 20 19:30:30.214774 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Jun 20 19:30:30.214838 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.214906 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.214963 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.215021 kernel: pci 0003:00:01.0: supports D1 D2 Jun 20 19:30:30.215078 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Jun 20 
19:30:30.215143 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.215201 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jun 20 19:30:30.215258 kernel: pci 0003:00:03.0: supports D1 D2 Jun 20 19:30:30.215314 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.215378 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.215435 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jun 20 19:30:30.215492 kernel: pci 0003:00:05.0: bridge window [io 0x0000-0x0fff] Jun 20 19:30:30.215551 kernel: pci 0003:00:05.0: bridge window [mem 0x10000000-0x100fffff] Jun 20 19:30:30.215608 kernel: pci 0003:00:05.0: bridge window [mem 0x240000000000-0x2400000fffff 64bit pref] Jun 20 19:30:30.215665 kernel: pci 0003:00:05.0: supports D1 D2 Jun 20 19:30:30.215723 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.215732 kernel: acpiphp: Slot [1-3] registered Jun 20 19:30:30.215740 kernel: acpiphp: Slot [2-3] registered Jun 20 19:30:30.215804 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jun 20 19:30:30.215865 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10020000-0x1003ffff] Jun 20 19:30:30.215926 kernel: pci 0003:03:00.0: BAR 2 [io 0x0020-0x003f] Jun 20 19:30:30.215984 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10044000-0x10047fff] Jun 20 19:30:30.216042 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Jun 20 19:30:30.216100 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x240000063fff 64bit pref] Jun 20 19:30:30.216158 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x24000007ffff 64bit pref]: contains BAR 0 for 8 VFs Jun 20 19:30:30.216218 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x240000043fff 64bit pref] Jun 20 19:30:30.216276 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x24000005ffff 64bit pref]: contains BAR 3 for 8 VFs Jun 20 19:30:30.216337 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Jun 20 19:30:30.216405 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jun 20 19:30:30.216464 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10000000-0x1001ffff] Jun 20 19:30:30.216522 kernel: pci 0003:03:00.1: BAR 2 [io 0x0000-0x001f] Jun 20 19:30:30.216580 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10040000-0x10043fff] Jun 20 19:30:30.216639 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Jun 20 19:30:30.216697 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x240000023fff 64bit pref] Jun 20 19:30:30.216757 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x24000003ffff 64bit pref]: contains BAR 0 for 8 VFs Jun 20 19:30:30.216815 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x240000003fff 64bit pref] Jun 20 19:30:30.216910 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x24000001ffff 64bit pref]: contains BAR 3 for 8 VFs Jun 20 19:30:30.216977 kernel: pci_bus 0003:00: on NUMA node 0 Jun 20 19:30:30.217039 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 19:30:30.217097 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.217155 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.217215 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to 
[bus 02] add_size 1000 Jun 20 19:30:30.217273 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.217330 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.217388 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Jun 20 19:30:30.217445 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Jun 20 19:30:30.217502 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jun 20 19:30:30.217560 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]: assigned Jun 20 19:30:30.217617 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]: assigned Jun 20 19:30:30.217676 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]: assigned Jun 20 19:30:30.217735 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]: assigned Jun 20 19:30:30.217794 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]: assigned Jun 20 19:30:30.217855 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.217913 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.217970 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.218027 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.218086 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.218143 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.218200 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.218259 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.218316 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.218373 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.218430 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.218488 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.218545 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.218602 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jun 20 19:30:30.218659 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Jun 20 19:30:30.218716 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jun 20 19:30:30.218773 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Jun 20 19:30:30.218830 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Jun 20 19:30:30.218893 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10400000-0x1041ffff]: assigned Jun 20 19:30:30.218953 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10420000-0x1043ffff]: assigned Jun 20 19:30:30.219012 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10440000-0x10443fff]: assigned Jun 20 19:30:30.219071 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000400000-0x24000041ffff 64bit pref]: assigned Jun 20 19:30:30.219131 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000420000-0x24000043ffff 64bit pref]: assigned Jun 20 19:30:30.219190 
kernel: pci 0003:03:00.1: BAR 3 [mem 0x10444000-0x10447fff]: assigned Jun 20 19:30:30.219249 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000440000-0x24000045ffff 64bit pref]: assigned Jun 20 19:30:30.219309 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000460000-0x24000047ffff 64bit pref]: assigned Jun 20 19:30:30.219368 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jun 20 19:30:30.219427 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jun 20 19:30:30.219485 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jun 20 19:30:30.219544 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jun 20 19:30:30.219603 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jun 20 19:30:30.219661 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jun 20 19:30:30.219721 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jun 20 19:30:30.219780 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jun 20 19:30:30.219837 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jun 20 19:30:30.219897 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Jun 20 19:30:30.219954 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Jun 20 19:30:30.220006 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Jun 20 19:30:30.220057 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Jun 20 19:30:30.220108 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Jun 20 19:30:30.220172 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Jun 20 19:30:30.220225 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Jun 20 19:30:30.220295 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Jun 20 19:30:30.220349 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Jun 20 19:30:30.220409 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Jun 20 19:30:30.220462 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Jun 20 19:30:30.220473 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Jun 20 19:30:30.220535 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.220592 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.220646 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.220701 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.220755 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.220810 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Jun 20 19:30:30.220822 kernel: PCI host bridge to bus 000c:00 Jun 20 19:30:30.220884 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Jun 20 19:30:30.220936 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Jun 20 19:30:30.220987 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Jun 20 19:30:30.221051 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.221116 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jun 20 
19:30:30.221176 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.221235 kernel: pci 000c:00:01.0: enabling Extended Tags Jun 20 19:30:30.221293 kernel: pci 000c:00:01.0: supports D1 D2 Jun 20 19:30:30.221350 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.221414 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.221472 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.221529 kernel: pci 000c:00:02.0: supports D1 D2 Jun 20 19:30:30.221586 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.221651 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.221710 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.221767 kernel: pci 000c:00:03.0: supports D1 D2 Jun 20 19:30:30.221827 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.221895 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.221953 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.222011 kernel: pci 000c:00:04.0: supports D1 D2 Jun 20 19:30:30.222068 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.222079 kernel: acpiphp: Slot [1-4] registered Jun 20 19:30:30.222087 kernel: acpiphp: Slot [2-4] registered Jun 20 19:30:30.222094 kernel: acpiphp: Slot [3-2] registered Jun 20 19:30:30.222102 kernel: acpiphp: Slot [4-2] registered Jun 20 19:30:30.222152 kernel: pci_bus 000c:00: on NUMA node 0 Jun 20 19:30:30.222210 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 19:30:30.222268 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.222326 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.222385 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jun 20 19:30:30.222442 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.222500 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.222557 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:30:30.222614 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.222671 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.222728 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:30:30.222787 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.222845 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.222905 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]: assigned Jun 20 19:30:30.222963 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]: assigned Jun 20 19:30:30.223020 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]: assigned Jun 20 19:30:30.223077 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]: assigned Jun 20 
19:30:30.223134 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]: assigned Jun 20 19:30:30.223193 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]: assigned Jun 20 19:30:30.223250 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]: assigned Jun 20 19:30:30.223308 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]: assigned Jun 20 19:30:30.223365 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.223422 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.223479 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.223536 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.223593 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.223651 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.223708 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.223765 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.223822 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.223882 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.223939 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.223996 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.224053 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.224112 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.224169 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.224225 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.224283 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.224341 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Jun 20 19:30:30.224398 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Jun 20 19:30:30.224456 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.224517 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Jun 20 19:30:30.224574 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Jun 20 19:30:30.224631 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.224689 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Jun 20 19:30:30.224745 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Jun 20 19:30:30.224805 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.224865 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Jun 20 19:30:30.224923 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Jun 20 19:30:30.224975 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Jun 20 19:30:30.225025 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Jun 20 19:30:30.225087 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Jun 20 19:30:30.225142 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Jun 20 19:30:30.225203 
kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Jun 20 19:30:30.225256 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Jun 20 19:30:30.225324 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Jun 20 19:30:30.225377 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Jun 20 19:30:30.225438 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Jun 20 19:30:30.225493 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Jun 20 19:30:30.225502 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Jun 20 19:30:30.225566 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.225622 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.225677 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.225732 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.225786 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.225842 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Jun 20 19:30:30.225855 kernel: PCI host bridge to bus 0002:00 Jun 20 19:30:30.225913 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Jun 20 19:30:30.225964 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Jun 20 19:30:30.226015 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Jun 20 19:30:30.226079 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.226144 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.226204 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.226261 kernel: pci 0002:00:01.0: supports D1 D2 Jun 20 19:30:30.226318 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.226381 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.226439 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jun 20 19:30:30.226497 kernel: pci 0002:00:03.0: supports D1 D2 Jun 20 19:30:30.226554 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.226619 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.226678 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jun 20 19:30:30.226735 kernel: pci 0002:00:05.0: supports D1 D2 Jun 20 19:30:30.226792 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.226859 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.226917 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jun 20 19:30:30.226974 kernel: pci 0002:00:07.0: supports D1 D2 Jun 20 19:30:30.227033 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.227043 kernel: acpiphp: Slot [1-5] registered Jun 20 19:30:30.227051 kernel: acpiphp: Slot [2-5] registered Jun 20 19:30:30.227059 kernel: acpiphp: Slot [3-3] registered Jun 20 19:30:30.227066 kernel: acpiphp: Slot [4-3] registered Jun 20 19:30:30.227115 kernel: pci_bus 0002:00: on NUMA node 0 Jun 20 19:30:30.227173 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 19:30:30.227231 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.227290 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jun 20 19:30:30.227348 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jun 20 19:30:30.227406 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.227464 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.227522 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:30:30.227580 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.227638 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.227696 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:30:30.227755 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.227813 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.227875 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]: assigned Jun 20 19:30:30.227932 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]: assigned Jun 20 19:30:30.227991 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]: assigned Jun 20 19:30:30.228048 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]: assigned Jun 20 19:30:30.228107 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]: assigned Jun 20 19:30:30.228165 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]: assigned Jun 20 19:30:30.228224 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]: assigned Jun 20 19:30:30.228281 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]: assigned Jun 20 19:30:30.228338 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.228395 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.228452 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.228509 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.228568 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.228625 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.228682 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.228739 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.228796 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.228856 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.228915 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.228973 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.229030 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't 
assign; no space Jun 20 19:30:30.229089 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.229147 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.229203 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.229260 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.229317 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Jun 20 19:30:30.229375 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Jun 20 19:30:30.229432 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jun 20 19:30:30.229491 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Jun 20 19:30:30.229549 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Jun 20 19:30:30.229606 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jun 20 19:30:30.229663 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Jun 20 19:30:30.229720 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Jun 20 19:30:30.229777 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jun 20 19:30:30.229836 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Jun 20 19:30:30.229896 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Jun 20 19:30:30.229949 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Jun 20 19:30:30.230000 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Jun 20 19:30:30.230061 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Jun 20 19:30:30.230115 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Jun 20 19:30:30.230175 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Jun 20 19:30:30.230230 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Jun 20 19:30:30.230290 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Jun 20 19:30:30.230343 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Jun 20 19:30:30.230411 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Jun 20 19:30:30.230464 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Jun 20 19:30:30.230474 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Jun 20 19:30:30.230535 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.230593 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.230647 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.230702 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.230756 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.230811 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Jun 20 19:30:30.230822 kernel: PCI host bridge to bus 0001:00 Jun 20 19:30:30.230884 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Jun 20 19:30:30.230936 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Jun 20 19:30:30.230986 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Jun 20 19:30:30.231050 kernel: pci 0001:00:00.0: [1def:e100] 
type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.231116 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.231174 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.231233 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jun 20 19:30:30.231290 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jun 20 19:30:30.231348 kernel: pci 0001:00:01.0: enabling Extended Tags Jun 20 19:30:30.231406 kernel: pci 0001:00:01.0: supports D1 D2 Jun 20 19:30:30.231463 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.231527 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.231586 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.231644 kernel: pci 0001:00:02.0: supports D1 D2 Jun 20 19:30:30.231701 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.231767 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.231824 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.231885 kernel: pci 0001:00:03.0: supports D1 D2 Jun 20 19:30:30.231942 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.232006 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.232066 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.232123 kernel: pci 0001:00:04.0: supports D1 D2 Jun 20 19:30:30.232180 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.232189 kernel: acpiphp: Slot [1-6] registered Jun 20 19:30:30.232253 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jun 20 19:30:30.232313 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref] Jun 20 19:30:30.232371 kernel: pci 0001:01:00.0: ROM [mem 0x60100000-0x601fffff pref] Jun 20 19:30:30.232430 kernel: pci 0001:01:00.0: PME# supported from D3cold Jun 20 19:30:30.232492 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 20 19:30:30.232558 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jun 20 19:30:30.232617 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref] Jun 20 19:30:30.232676 kernel: pci 0001:01:00.1: ROM [mem 0x60000000-0x600fffff pref] Jun 20 19:30:30.232734 kernel: pci 0001:01:00.1: PME# supported from D3cold Jun 20 19:30:30.232744 kernel: acpiphp: Slot [2-6] registered Jun 20 19:30:30.232752 kernel: acpiphp: Slot [3-4] registered Jun 20 19:30:30.232761 kernel: acpiphp: Slot [4-4] registered Jun 20 19:30:30.232811 kernel: pci_bus 0001:00: on NUMA node 0 Jun 20 19:30:30.232878 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 19:30:30.232939 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jun 20 19:30:30.232997 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.233055 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:30:30.233112 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:30:30.233169 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 
add_align 100000 Jun 20 19:30:30.233230 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.233288 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:30:30.233345 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.233403 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.233460 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]: assigned Jun 20 19:30:30.233517 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]: assigned Jun 20 19:30:30.233575 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]: assigned Jun 20 19:30:30.233633 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]: assigned Jun 20 19:30:30.233691 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]: assigned Jun 20 19:30:30.233748 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]: assigned Jun 20 19:30:30.233805 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]: assigned Jun 20 19:30:30.233874 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]: assigned Jun 20 19:30:30.233933 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.233992 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234051 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.234109 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234166 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.234223 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234280 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.234338 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234395 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.234452 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234509 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.234567 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234625 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.234682 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234739 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.234798 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.234861 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref]: assigned Jun 20 19:30:30.234922 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref]: assigned Jun 20 19:30:30.234984 kernel: pci 0001:01:00.0: ROM [mem 0x60000000-0x600fffff pref]: assigned Jun 20 19:30:30.235044 kernel: pci 0001:01:00.1: ROM [mem 0x60100000-0x601fffff pref]: assigned Jun 20 19:30:30.235102 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:30.235159 
kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jun 20 19:30:30.235216 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jun 20 19:30:30.235274 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jun 20 19:30:30.235331 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Jun 20 19:30:30.235390 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Jun 20 19:30:30.235450 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.235507 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Jun 20 19:30:30.235565 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Jun 20 19:30:30.235622 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jun 20 19:30:30.235680 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Jun 20 19:30:30.235737 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Jun 20 19:30:30.235791 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Jun 20 19:30:30.235842 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Jun 20 19:30:30.235907 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Jun 20 19:30:30.235963 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Jun 20 19:30:30.236031 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Jun 20 19:30:30.236084 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Jun 20 19:30:30.236145 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Jun 20 19:30:30.236199 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Jun 20 19:30:30.236260 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Jun 20 19:30:30.236312 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Jun 20 19:30:30.236322 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Jun 20 19:30:30.236383 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:30.236441 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Jun 20 19:30:30.236498 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Jun 20 19:30:30.236552 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jun 20 19:30:30.236607 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Jun 20 19:30:30.236661 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Jun 20 19:30:30.236671 kernel: PCI host bridge to bus 0004:00 Jun 20 19:30:30.236729 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Jun 20 19:30:30.236781 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Jun 20 19:30:30.236832 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Jun 20 19:30:30.236899 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:30.236964 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.237022 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jun 20 19:30:30.237079 kernel: pci 0004:00:01.0: bridge window [io 0x0000-0x0fff] Jun 20 19:30:30.237137 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x220fffff] Jun 20 
19:30:30.237193 kernel: pci 0004:00:01.0: supports D1 D2 Jun 20 19:30:30.237252 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.237315 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.237373 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.237431 kernel: pci 0004:00:03.0: bridge window [mem 0x22200000-0x222fffff] Jun 20 19:30:30.237488 kernel: pci 0004:00:03.0: supports D1 D2 Jun 20 19:30:30.237545 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.237610 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:30.237670 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jun 20 19:30:30.237727 kernel: pci 0004:00:05.0: supports D1 D2 Jun 20 19:30:30.237784 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Jun 20 19:30:30.237852 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jun 20 19:30:30.237912 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jun 20 19:30:30.237971 kernel: pci 0004:01:00.0: bridge window [io 0x0000-0x0fff] Jun 20 19:30:30.238029 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x220fffff] Jun 20 19:30:30.238089 kernel: pci 0004:01:00.0: enabling Extended Tags Jun 20 19:30:30.238147 kernel: pci 0004:01:00.0: supports D1 D2 Jun 20 19:30:30.238206 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 19:30:30.238269 kernel: pci_bus 0004:02: extended config space not accessible Jun 20 19:30:30.238337 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jun 20 19:30:30.238398 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff] Jun 20 19:30:30.238460 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff] Jun 20 19:30:30.238522 kernel: pci 0004:02:00.0: BAR 2 [io 0x0000-0x007f] Jun 20 19:30:30.238583 kernel: pci 0004:02:00.0: supports D1 D2 Jun 20 19:30:30.238643 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 19:30:30.238716 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 PCIe Endpoint Jun 20 19:30:30.238776 kernel: pci 0004:03:00.0: BAR 0 [mem 0x22200000-0x22201fff 64bit] Jun 20 19:30:30.238835 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Jun 20 19:30:30.238889 kernel: pci_bus 0004:00: on NUMA node 0 Jun 20 19:30:30.238950 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Jun 20 19:30:30.239008 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:30:30.239068 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:30.239127 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jun 20 19:30:30.239185 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:30:30.239242 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.239300 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:30:30.239359 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jun 20 19:30:30.239417 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]: assigned Jun 20 19:30:30.239476 kernel: pci 
0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]: assigned Jun 20 19:30:30.239534 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]: assigned Jun 20 19:30:30.239591 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]: assigned Jun 20 19:30:30.239648 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]: assigned Jun 20 19:30:30.239705 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.239762 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.239822 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.239882 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.239940 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.239997 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.240054 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.240111 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.240168 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.240226 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.240285 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.240342 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.240401 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jun 20 19:30:30.240461 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:30.240520 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:30.240580 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff]: assigned Jun 20 19:30:30.240642 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff]: assigned Jun 20 19:30:30.240702 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: can't assign; no space Jun 20 19:30:30.240764 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: failed to assign Jun 20 19:30:30.240823 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jun 20 19:30:30.240886 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Jun 20 19:30:30.240943 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jun 20 19:30:30.241001 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Jun 20 19:30:30.241058 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Jun 20 19:30:30.241117 kernel: pci 0004:03:00.0: BAR 0 [mem 0x23000000-0x23001fff 64bit]: assigned Jun 20 19:30:30.241177 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jun 20 19:30:30.241233 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Jun 20 19:30:30.241291 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Jun 20 19:30:30.241348 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jun 20 19:30:30.241408 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Jun 20 19:30:30.241465 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Jun 20 19:30:30.241517 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Jun 20 19:30:30.241571 kernel: pci_bus 0004:00: resource 
4 [mem 0x20000000-0x2fffffff window] Jun 20 19:30:30.241622 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Jun 20 19:30:30.241683 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Jun 20 19:30:30.241737 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Jun 20 19:30:30.241793 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Jun 20 19:30:30.241856 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Jun 20 19:30:30.241909 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Jun 20 19:30:30.241971 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Jun 20 19:30:30.242024 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Jun 20 19:30:30.242034 kernel: ACPI: CPU18 has been hot-added Jun 20 19:30:30.242042 kernel: ACPI: CPU58 has been hot-added Jun 20 19:30:30.242049 kernel: ACPI: CPU38 has been hot-added Jun 20 19:30:30.242057 kernel: ACPI: CPU78 has been hot-added Jun 20 19:30:30.242064 kernel: ACPI: CPU16 has been hot-added Jun 20 19:30:30.242072 kernel: ACPI: CPU56 has been hot-added Jun 20 19:30:30.242081 kernel: ACPI: CPU36 has been hot-added Jun 20 19:30:30.242088 kernel: ACPI: CPU76 has been hot-added Jun 20 19:30:30.242096 kernel: ACPI: CPU17 has been hot-added Jun 20 19:30:30.242103 kernel: ACPI: CPU57 has been hot-added Jun 20 19:30:30.242111 kernel: ACPI: CPU37 has been hot-added Jun 20 19:30:30.242118 kernel: ACPI: CPU77 has been hot-added Jun 20 19:30:30.242126 kernel: ACPI: CPU19 has been hot-added Jun 20 19:30:30.242134 kernel: ACPI: CPU59 has been hot-added Jun 20 19:30:30.242141 kernel: ACPI: CPU39 has been hot-added Jun 20 19:30:30.242150 kernel: ACPI: CPU79 has been hot-added Jun 20 19:30:30.242159 kernel: ACPI: CPU12 has been hot-added Jun 20 19:30:30.242166 kernel: ACPI: CPU52 has been hot-added Jun 20 19:30:30.242174 kernel: ACPI: CPU32 has been hot-added Jun 20 19:30:30.242181 kernel: ACPI: CPU72 has been hot-added Jun 20 19:30:30.242189 kernel: ACPI: CPU8 has been hot-added Jun 20 19:30:30.242197 kernel: ACPI: CPU48 has been hot-added Jun 20 19:30:30.242205 kernel: ACPI: CPU28 has been hot-added Jun 20 19:30:30.242213 kernel: ACPI: CPU68 has been hot-added Jun 20 19:30:30.242221 kernel: ACPI: CPU10 has been hot-added Jun 20 19:30:30.242231 kernel: ACPI: CPU50 has been hot-added Jun 20 19:30:30.242238 kernel: ACPI: CPU30 has been hot-added Jun 20 19:30:30.242246 kernel: ACPI: CPU70 has been hot-added Jun 20 19:30:30.242253 kernel: ACPI: CPU14 has been hot-added Jun 20 19:30:30.242261 kernel: ACPI: CPU54 has been hot-added Jun 20 19:30:30.242268 kernel: ACPI: CPU34 has been hot-added Jun 20 19:30:30.242276 kernel: ACPI: CPU74 has been hot-added Jun 20 19:30:30.242283 kernel: ACPI: CPU4 has been hot-added Jun 20 19:30:30.242291 kernel: ACPI: CPU44 has been hot-added Jun 20 19:30:30.242299 kernel: ACPI: CPU24 has been hot-added Jun 20 19:30:30.242307 kernel: ACPI: CPU64 has been hot-added Jun 20 19:30:30.242314 kernel: ACPI: CPU0 has been hot-added Jun 20 19:30:30.242322 kernel: ACPI: CPU40 has been hot-added Jun 20 19:30:30.242329 kernel: ACPI: CPU20 has been hot-added Jun 20 19:30:30.242337 kernel: ACPI: CPU60 has been hot-added Jun 20 19:30:30.242344 kernel: ACPI: CPU2 has been hot-added Jun 20 19:30:30.242352 kernel: ACPI: CPU42 has been hot-added Jun 20 19:30:30.242359 kernel: ACPI: CPU22 has been hot-added Jun 20 19:30:30.242367 kernel: ACPI: CPU62 has been hot-added Jun 20 19:30:30.242376 
kernel: ACPI: CPU6 has been hot-added Jun 20 19:30:30.242384 kernel: ACPI: CPU46 has been hot-added Jun 20 19:30:30.242391 kernel: ACPI: CPU26 has been hot-added Jun 20 19:30:30.242399 kernel: ACPI: CPU66 has been hot-added Jun 20 19:30:30.242406 kernel: ACPI: CPU5 has been hot-added Jun 20 19:30:30.242414 kernel: ACPI: CPU45 has been hot-added Jun 20 19:30:30.242421 kernel: ACPI: CPU25 has been hot-added Jun 20 19:30:30.242429 kernel: ACPI: CPU65 has been hot-added Jun 20 19:30:30.242437 kernel: ACPI: CPU1 has been hot-added Jun 20 19:30:30.242446 kernel: ACPI: CPU41 has been hot-added Jun 20 19:30:30.242453 kernel: ACPI: CPU21 has been hot-added Jun 20 19:30:30.242461 kernel: ACPI: CPU61 has been hot-added Jun 20 19:30:30.242468 kernel: ACPI: CPU3 has been hot-added Jun 20 19:30:30.242475 kernel: ACPI: CPU43 has been hot-added Jun 20 19:30:30.242483 kernel: ACPI: CPU23 has been hot-added Jun 20 19:30:30.242490 kernel: ACPI: CPU63 has been hot-added Jun 20 19:30:30.242498 kernel: ACPI: CPU7 has been hot-added Jun 20 19:30:30.242505 kernel: ACPI: CPU47 has been hot-added Jun 20 19:30:30.242514 kernel: ACPI: CPU27 has been hot-added Jun 20 19:30:30.242522 kernel: ACPI: CPU67 has been hot-added Jun 20 19:30:30.242529 kernel: ACPI: CPU13 has been hot-added Jun 20 19:30:30.242537 kernel: ACPI: CPU53 has been hot-added Jun 20 19:30:30.242545 kernel: ACPI: CPU33 has been hot-added Jun 20 19:30:30.242552 kernel: ACPI: CPU73 has been hot-added Jun 20 19:30:30.242560 kernel: ACPI: CPU9 has been hot-added Jun 20 19:30:30.242567 kernel: ACPI: CPU49 has been hot-added Jun 20 19:30:30.242575 kernel: ACPI: CPU29 has been hot-added Jun 20 19:30:30.242582 kernel: ACPI: CPU69 has been hot-added Jun 20 19:30:30.242591 kernel: ACPI: CPU11 has been hot-added Jun 20 19:30:30.242599 kernel: ACPI: CPU51 has been hot-added Jun 20 19:30:30.242606 kernel: ACPI: CPU31 has been hot-added Jun 20 19:30:30.242614 kernel: ACPI: CPU71 has been hot-added Jun 20 19:30:30.242621 kernel: ACPI: CPU15 has been hot-added Jun 20 19:30:30.242630 kernel: ACPI: CPU55 has been hot-added Jun 20 19:30:30.242638 kernel: ACPI: CPU35 has been hot-added Jun 20 19:30:30.242645 kernel: ACPI: CPU75 has been hot-added Jun 20 19:30:30.242653 kernel: iommu: Default domain type: Translated Jun 20 19:30:30.242662 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 20 19:30:30.242670 kernel: efivars: Registered efivars operations Jun 20 19:30:30.242734 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Jun 20 19:30:30.242796 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Jun 20 19:30:30.242859 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Jun 20 19:30:30.242869 kernel: vgaarb: loaded Jun 20 19:30:30.242877 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 20 19:30:30.242884 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:30:30.242892 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:30:30.242901 kernel: pnp: PnP ACPI init Jun 20 19:30:30.242966 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Jun 20 19:30:30.243020 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Jun 20 19:30:30.243072 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Jun 20 19:30:30.243124 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Jun 20 19:30:30.243176 kernel: system 00:00: [mem 
0x2bfff0000000-0x2bffffffffff window] could not be reserved Jun 20 19:30:30.243228 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Jun 20 19:30:30.243299 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Jun 20 19:30:30.243353 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Jun 20 19:30:30.243363 kernel: pnp: PnP ACPI: found 1 devices Jun 20 19:30:30.243371 kernel: NET: Registered PF_INET protocol family Jun 20 19:30:30.243378 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:30:30.243386 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jun 20 19:30:30.243394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:30:30.243402 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 19:30:30.243411 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 20 19:30:30.243419 kernel: TCP: Hash tables configured (established 524288 bind 65536) Jun 20 19:30:30.243427 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 20 19:30:30.243434 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 20 19:30:30.243442 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:30:30.243504 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Jun 20 19:30:30.243514 kernel: kvm [1]: nv: 554 coarse grained trap handlers Jun 20 19:30:30.243522 kernel: kvm [1]: IPA Size Limit: 48 bits Jun 20 19:30:30.243530 kernel: kvm [1]: GICv3: no GICV resource entry Jun 20 19:30:30.243539 kernel: kvm [1]: disabling GICv2 emulation Jun 20 19:30:30.243547 kernel: kvm [1]: GIC system register CPU interface enabled Jun 20 19:30:30.243554 kernel: kvm [1]: vgic interrupt IRQ9 Jun 20 19:30:30.243562 kernel: kvm [1]: VHE mode initialized successfully Jun 20 19:30:30.243569 kernel: Initialise system trusted keyrings Jun 20 19:30:30.243577 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Jun 20 19:30:30.243584 kernel: Key type asymmetric registered Jun 20 19:30:30.243592 kernel: Asymmetric key parser 'x509' registered Jun 20 19:30:30.243599 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 20 19:30:30.243609 kernel: io scheduler mq-deadline registered Jun 20 19:30:30.243616 kernel: io scheduler kyber registered Jun 20 19:30:30.243624 kernel: io scheduler bfq registered Jun 20 19:30:30.243631 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 20 19:30:30.243639 kernel: ACPI: button: Power Button [PWRB] Jun 20 19:30:30.243647 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Jun 20 19:30:30.243655 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:30:30.243720 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Jun 20 19:30:30.243775 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.243831 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.243893 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.243947 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Jun 20 19:30:30.244000 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Jun 20 19:30:30.244061 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Jun 20 19:30:30.244115 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.244172 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.244225 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.244278 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Jun 20 19:30:30.244331 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Jun 20 19:30:30.244392 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Jun 20 19:30:30.244445 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.244500 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.244554 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.244606 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Jun 20 19:30:30.244659 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Jun 20 19:30:30.244721 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Jun 20 19:30:30.244774 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.244827 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.244888 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.244941 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Jun 20 19:30:30.244997 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Jun 20 19:30:30.245058 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Jun 20 19:30:30.245112 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.245165 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.245218 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.245273 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Jun 20 19:30:30.245326 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Jun 20 19:30:30.245386 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Jun 20 19:30:30.245440 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.245493 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.245546 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.245601 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Jun 20 
19:30:30.245654 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Jun 20 19:30:30.245722 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Jun 20 19:30:30.245776 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.245829 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.245887 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.245943 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Jun 20 19:30:30.245998 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Jun 20 19:30:30.246058 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Jun 20 19:30:30.246112 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Jun 20 19:30:30.246165 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jun 20 19:30:30.246218 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Jun 20 19:30:30.246271 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Jun 20 19:30:30.246325 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Jun 20 19:30:30.246335 kernel: thunder_xcv, ver 1.0 Jun 20 19:30:30.246343 kernel: thunder_bgx, ver 1.0 Jun 20 19:30:30.246350 kernel: nicpf, ver 1.0 Jun 20 19:30:30.246358 kernel: nicvf, ver 1.0 Jun 20 19:30:30.246418 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 19:30:30.246473 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T19:30:28 UTC (1750447828) Jun 20 19:30:30.246482 kernel: efifb: probing for efifb Jun 20 19:30:30.246492 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Jun 20 19:30:30.246499 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jun 20 19:30:30.246507 kernel: efifb: scrolling: redraw Jun 20 19:30:30.246515 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 19:30:30.246523 kernel: Console: switching to colour frame buffer device 100x37 Jun 20 19:30:30.246530 kernel: fb0: EFI VGA frame buffer device Jun 20 19:30:30.246538 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Jun 20 19:30:30.246545 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:30:30.246553 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jun 20 19:30:30.246562 kernel: watchdog: NMI not fully supported Jun 20 19:30:30.246570 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:30:30.246577 kernel: watchdog: Hard watchdog permanently disabled Jun 20 19:30:30.246585 kernel: Segment Routing with IPv6 Jun 20 19:30:30.246593 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:30:30.246601 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:30:30.246609 kernel: Key type dns_resolver registered Jun 20 19:30:30.246616 kernel: registered taskstats version 1 Jun 20 19:30:30.246624 kernel: Loading compiled-in X.509 certificates Jun 20 19:30:30.246633 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 4dab98fc4de70d482d00f54d1877f6231fc25377' Jun 20 19:30:30.246640 kernel: Demotion targets for Node 0: null Jun 20 19:30:30.246648 kernel: Key type .fscrypt registered Jun 20 19:30:30.246656 kernel: Key type fscrypt-provisioning registered Jun 20 19:30:30.246663 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 20 19:30:30.246671 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:30:30.246679 kernel: ima: No architecture policies found Jun 20 19:30:30.246686 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 19:30:30.246746 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Jun 20 19:30:30.246806 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Jun 20 19:30:30.246869 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Jun 20 19:30:30.246927 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Jun 20 19:30:30.246987 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Jun 20 19:30:30.247045 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Jun 20 19:30:30.247105 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Jun 20 19:30:30.247164 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Jun 20 19:30:30.247223 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Jun 20 19:30:30.247284 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Jun 20 19:30:30.247345 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Jun 20 19:30:30.247403 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Jun 20 19:30:30.247461 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Jun 20 19:30:30.247519 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Jun 20 19:30:30.247579 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Jun 20 19:30:30.247637 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Jun 20 19:30:30.247696 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Jun 20 19:30:30.247754 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Jun 20 19:30:30.247814 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Jun 20 19:30:30.247876 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Jun 20 19:30:30.247935 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Jun 20 19:30:30.247993 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Jun 20 19:30:30.248052 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Jun 20 19:30:30.248110 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Jun 20 19:30:30.248169 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Jun 20 19:30:30.248227 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Jun 20 19:30:30.248287 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Jun 20 19:30:30.248345 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Jun 20 19:30:30.248404 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Jun 20 19:30:30.248462 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Jun 20 19:30:30.248521 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Jun 20 19:30:30.248579 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Jun 20 19:30:30.248637 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Jun 20 19:30:30.248695 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Jun 20 19:30:30.248756 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Jun 20 19:30:30.248813 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Jun 20 19:30:30.248876 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Jun 20 19:30:30.248935 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Jun 20 19:30:30.248995 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Jun 20 19:30:30.249052 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Jun 20 19:30:30.249111 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Jun 20 19:30:30.249170 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Jun 20 19:30:30.249230 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Jun 20 19:30:30.249288 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Jun 20 19:30:30.249347 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Jun 20 19:30:30.249404 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Jun 20 19:30:30.249463 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Jun 20 19:30:30.249521 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Jun 20 19:30:30.249580 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Jun 20 19:30:30.249640 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Jun 20 19:30:30.249699 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Jun 20 19:30:30.249759 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Jun 20 19:30:30.249819 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Jun 20 19:30:30.249881 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Jun 20 19:30:30.249939 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Jun 20 19:30:30.249997 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Jun 20 19:30:30.250056 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Jun 20 19:30:30.250113 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Jun 20 19:30:30.250171 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Jun 20 19:30:30.250231 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Jun 20 19:30:30.250291 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Jun 20 19:30:30.250301 kernel: clk: Disabling unused clocks Jun 20 19:30:30.250309 kernel: PM: genpd: Disabling unused power domains Jun 20 19:30:30.250317 kernel: Warning: unable to open an initial console. Jun 20 19:30:30.250325 kernel: Freeing unused kernel memory: 39424K Jun 20 19:30:30.250332 kernel: Run /init as init process Jun 20 19:30:30.250340 kernel: with arguments: Jun 20 19:30:30.250347 kernel: /init Jun 20 19:30:30.250356 kernel: with environment: Jun 20 19:30:30.250364 kernel: HOME=/ Jun 20 19:30:30.250371 kernel: TERM=linux Jun 20 19:30:30.250378 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:30:30.250387 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:30:30.250398 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:30:30.250406 systemd[1]: Detected architecture arm64. Jun 20 19:30:30.250415 systemd[1]: Running in initrd. Jun 20 19:30:30.250423 systemd[1]: No hostname configured, using default hostname. Jun 20 19:30:30.250431 systemd[1]: Hostname set to . Jun 20 19:30:30.250439 systemd[1]: Initializing machine ID from random generator. Jun 20 19:30:30.250447 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:30:30.250455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:30:30.250463 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:30:30.250471 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:30:30.250481 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jun 20 19:30:30.250489 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:30:30.250497 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:30:30.250507 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:30:30.250515 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:30:30.250523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:30:30.250531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:30:30.250540 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:30:30.250548 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:30:30.250556 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:30:30.250564 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:30:30.250572 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:30:30.250580 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:30:30.250588 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:30:30.250596 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:30:30.250603 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:30:30.250613 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:30:30.250620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:30:30.250628 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:30:30.250636 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:30:30.250645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:30:30.250652 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:30:30.250661 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:30:30.250669 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:30:30.250678 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:30:30.250686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:30:30.250694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:30.250723 systemd-journald[909]: Collecting audit messages is disabled. Jun 20 19:30:30.250743 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:30:30.250751 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:30:30.250759 kernel: Bridge firewalling registered Jun 20 19:30:30.250768 systemd-journald[909]: Journal started Jun 20 19:30:30.250786 systemd-journald[909]: Runtime Journal (/run/log/journal/112d1bb4969d43848197982b8ab5a873) is 8M, max 4G, 3.9G free. Jun 20 19:30:30.187523 systemd-modules-load[911]: Inserted module 'overlay' Jun 20 19:30:30.275319 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 20 19:30:30.243452 systemd-modules-load[911]: Inserted module 'br_netfilter' Jun 20 19:30:30.280949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:30:30.291782 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:30:30.302672 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:30:30.313456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:30.327995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:30:30.336064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:30:30.363560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:30:30.370321 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:30:30.382643 systemd-tmpfiles[940]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:30:30.388660 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:30:30.404377 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:30:30.420802 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:30:30.431956 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:30:30.451525 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:30:30.486170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:30:30.499539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:30:30.512615 dracut-cmdline[959]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 19:30:30.520177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:30:30.525950 systemd-resolved[962]: Positive Trust Anchors: Jun 20 19:30:30.525959 systemd-resolved[962]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:30:30.525995 systemd-resolved[962]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:30:30.541510 systemd-resolved[962]: Defaulting to hostname 'linux'. Jun 20 19:30:30.557852 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:30:30.578041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jun 20 19:30:30.681864 kernel: SCSI subsystem initialized Jun 20 19:30:30.696862 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:30:30.716857 kernel: iscsi: registered transport (tcp) Jun 20 19:30:30.744372 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:30:30.744396 kernel: QLogic iSCSI HBA Driver Jun 20 19:30:30.763122 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:30:30.797908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:30:30.814327 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:30:30.865447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:30:30.877054 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:30:30.959862 kernel: raid6: neonx8 gen() 15861 MB/s Jun 20 19:30:30.985854 kernel: raid6: neonx4 gen() 15881 MB/s Jun 20 19:30:31.010858 kernel: raid6: neonx2 gen() 13264 MB/s Jun 20 19:30:31.035858 kernel: raid6: neonx1 gen() 10472 MB/s Jun 20 19:30:31.060858 kernel: raid6: int64x8 gen() 6931 MB/s Jun 20 19:30:31.085859 kernel: raid6: int64x4 gen() 7352 MB/s Jun 20 19:30:31.110857 kernel: raid6: int64x2 gen() 6130 MB/s Jun 20 19:30:31.139155 kernel: raid6: int64x1 gen() 5077 MB/s Jun 20 19:30:31.139176 kernel: raid6: using algorithm neonx4 gen() 15881 MB/s Jun 20 19:30:31.173593 kernel: raid6: .... xor() 12376 MB/s, rmw enabled Jun 20 19:30:31.173614 kernel: raid6: using neon recovery algorithm Jun 20 19:30:31.198257 kernel: xor: measuring software checksum speed Jun 20 19:30:31.198279 kernel: 8regs : 21641 MB/sec Jun 20 19:30:31.206580 kernel: 32regs : 21693 MB/sec Jun 20 19:30:31.214930 kernel: arm64_neon : 28215 MB/sec Jun 20 19:30:31.222936 kernel: xor: using function: arm64_neon (28215 MB/sec) Jun 20 19:30:31.288859 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:30:31.294948 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:30:31.301654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:30:31.346641 systemd-udevd[1185]: Using default interface naming scheme 'v255'. Jun 20 19:30:31.350613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:30:31.356717 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:30:31.394823 dracut-pre-trigger[1196]: rd.md=0: removing MD RAID activation Jun 20 19:30:31.417169 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:30:31.426756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:30:31.724906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:30:31.860070 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 19:30:31.860088 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 19:30:31.860098 kernel: ACPI: bus type USB registered Jun 20 19:30:31.860108 kernel: nvme 0005:03:00.0: Adding to iommu group 31 Jun 20 19:30:31.860258 kernel: usbcore: registered new interface driver usbfs Jun 20 19:30:31.860269 kernel: usbcore: registered new interface driver hub Jun 20 19:30:31.860278 kernel: usbcore: registered new device driver usb Jun 20 19:30:31.860287 kernel: nvme 0005:04:00.0: Adding to iommu group 32 Jun 20 19:30:31.860374 kernel: nvme nvme0: pci function 0005:03:00.0 Jun 20 19:30:31.860460 kernel: nvme nvme1: pci function 0005:04:00.0 Jun 20 19:30:31.860534 kernel: PTP clock support registered Jun 20 19:30:31.860544 kernel: nvme nvme0: D3 entry latency set to 8 seconds Jun 20 19:30:31.860606 kernel: nvme nvme1: D3 entry latency set to 8 seconds Jun 20 19:30:31.850296 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:30:31.889686 kernel: nvme nvme1: 32/0/0 default/read/poll queues Jun 20 19:30:31.889836 kernel: nvme nvme0: 32/0/0 default/read/poll queues Jun 20 19:30:31.867188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:30:31.957491 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:30:31.957506 kernel: GPT:9289727 != 1875385007 Jun 20 19:30:31.957518 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:30:31.957527 kernel: GPT:9289727 != 1875385007 Jun 20 19:30:31.957535 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 19:30:31.957544 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:30:31.867248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:32.044257 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 33 Jun 20 19:30:32.044392 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jun 20 19:30:32.044469 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Jun 20 19:30:32.044546 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Jun 20 19:30:32.044618 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jun 20 19:30:32.044628 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jun 20 19:30:32.044637 kernel: igb 0003:03:00.0: Adding to iommu group 34 Jun 20 19:30:31.894818 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:32.064257 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 Jun 20 19:30:32.039865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:32.059770 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:30:32.083062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Jun 20 19:30:32.100881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Jun 20 19:30:32.118865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:32.135026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jun 20 19:30:32.158034 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jun 20 19:30:32.163724 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. 
Jun 20 19:30:32.182103 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:30:32.447405 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000000100000010 Jun 20 19:30:32.447550 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jun 20 19:30:32.447625 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Jun 20 19:30:32.447696 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Jun 20 19:30:32.447766 kernel: hub 1-0:1.0: USB hub found Jun 20 19:30:32.447867 kernel: hub 1-0:1.0: 4 ports detected Jun 20 19:30:32.447947 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jun 20 19:30:32.448071 kernel: hub 2-0:1.0: USB hub found Jun 20 19:30:32.448159 kernel: hub 2-0:1.0: 4 ports detected Jun 20 19:30:32.448236 kernel: mlx5_core 0001:01:00.0: PTM is not supported by PCIe Jun 20 19:30:32.448315 kernel: mlx5_core 0001:01:00.0: firmware version: 14.30.1004 Jun 20 19:30:32.448387 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 20 19:30:32.448457 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:30:32.448467 kernel: igb 0003:03:00.0: added PHC on eth0 Jun 20 19:30:32.448548 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:30:32.448557 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jun 20 19:30:32.448627 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6c Jun 20 19:30:32.448697 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Jun 20 19:30:32.448766 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jun 20 19:30:32.448835 kernel: igb 0003:03:00.1: Adding to iommu group 36 Jun 20 19:30:32.449003 disk-uuid[1336]: Primary Header is updated. Jun 20 19:30:32.449003 disk-uuid[1336]: Secondary Entries is updated. Jun 20 19:30:32.449003 disk-uuid[1336]: Secondary Header is updated. Jun 20 19:30:32.487487 kernel: igb 0003:03:00.1: added PHC on eth1 Jun 20 19:30:32.487693 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Jun 20 19:30:32.498831 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6d Jun 20 19:30:32.510350 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Jun 20 19:30:32.519804 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 
8 rx queue(s), 8 tx queue(s) Jun 20 19:30:32.533858 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Jun 20 19:30:32.533943 kernel: mlx5_core 0001:01:00.0: E-Switch: Total vports 2, per vport: max uc(1024) max mc(16384) Jun 20 19:30:32.560858 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Jun 20 19:30:32.560883 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Jun 20 19:30:32.570863 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Jun 20 19:30:32.703860 kernel: hub 1-3:1.0: USB hub found Jun 20 19:30:32.712854 kernel: hub 1-3:1.0: 4 ports detected Jun 20 19:30:32.812862 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Jun 20 19:30:32.839860 kernel: hub 2-3:1.0: USB hub found Jun 20 19:30:32.848852 kernel: hub 2-3:1.0: 4 ports detected Jun 20 19:30:32.890860 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 20 19:30:32.902861 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Jun 20 19:30:32.918172 kernel: mlx5_core 0001:01:00.1: PTM is not supported by PCIe Jun 20 19:30:32.918321 kernel: mlx5_core 0001:01:00.1: firmware version: 14.30.1004 Jun 20 19:30:32.932203 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 20 19:30:33.212863 kernel: mlx5_core 0001:01:00.1: E-Switch: Total vports 2, per vport: max uc(1024) max mc(16384) Jun 20 19:30:33.230450 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Jun 20 19:30:33.335859 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:30:33.336237 disk-uuid[1339]: The operation has completed successfully. Jun 20 19:30:33.536863 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 20 19:30:33.551870 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Jun 20 19:30:33.551957 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Jun 20 19:30:33.595980 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:30:33.596077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:30:33.606796 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:30:33.615642 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:30:33.623430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:30:33.637718 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:30:33.648997 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:30:33.658695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:30:33.687674 sh[1535]: Success Jun 20 19:30:33.696280 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:30:33.747497 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:30:33.747514 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:30:33.747528 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:30:33.767857 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 20 19:30:33.799830 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:30:33.812319 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jun 20 19:30:33.838910 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:30:33.848854 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:30:33.848868 kernel: BTRFS: device fsid eac9c4a0-5098-4f12-a7ad-af09956ff0e3 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (1558) Jun 20 19:30:33.848877 kernel: BTRFS info (device dm-0): first mount of filesystem eac9c4a0-5098-4f12-a7ad-af09956ff0e3 Jun 20 19:30:33.848887 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:30:33.848898 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:30:33.938400 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:30:33.949922 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:30:33.961198 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:30:33.962241 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:30:33.997421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:30:34.027278 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:6) scanned by mount (1586) Jun 20 19:30:34.027297 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 19:30:34.064433 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:30:34.078722 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:30:34.112856 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 19:30:34.114574 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:30:34.126142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:30:34.134196 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:30:34.158238 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:30:34.192250 systemd-networkd[1737]: lo: Link UP Jun 20 19:30:34.192256 systemd-networkd[1737]: lo: Gained carrier Jun 20 19:30:34.195778 systemd-networkd[1737]: Enumeration completed Jun 20 19:30:34.195856 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:30:34.197078 systemd-networkd[1737]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:30:34.204353 systemd[1]: Reached target network.target - Network. Jun 20 19:30:34.250234 systemd-networkd[1737]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:30:34.271834 ignition[1735]: Ignition 2.21.0 Jun 20 19:30:34.271842 ignition[1735]: Stage: fetch-offline Jun 20 19:30:34.271880 ignition[1735]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:34.277524 unknown[1735]: fetched base config from "system" Jun 20 19:30:34.271888 ignition[1735]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jun 20 19:30:34.277531 unknown[1735]: fetched user config from "system" Jun 20 19:30:34.272069 ignition[1735]: parsed url from cmdline: "" Jun 20 19:30:34.280697 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
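verity-setup here activates dm-verity for the /usr device, with the kernel picking the sha256-ce (ARMv8 crypto extensions) implementation. Conceptually, dm-verity authenticates a block device by hashing it in fixed-size blocks and folding those hashes into a tree whose root is compared against a trusted value. The sketch below shows only the leaf level over 4096-byte blocks; the block size and path are illustrative assumptions, and the real on-disk format additionally uses a salt, upper tree levels, and a superblock.

    import hashlib

    BLOCK = 4096  # assumed data block size; dm-verity supports other sizes as well

    def leaf_hashes(path):
        """Hash a device or file in fixed-size blocks, like the lowest level of a
        dm-verity hash tree (simplified: no salt, no upper levels, no superblock)."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    # Example call with a hypothetical path: leaf_hashes("/dev/mapper/usr")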
Jun 20 19:30:34.272072 ignition[1735]: no config URL provided Jun 20 19:30:34.288226 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 20 19:30:34.272076 ignition[1735]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:30:34.289351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:30:34.272129 ignition[1735]: parsing config with SHA512: c874b9ddf6b593f5b3ed6230ed6809dd7c5924b76476e4b23ff2aec28564f43010ae347df6ea602664af9a79d353dbf3aaeb2b33a81f4cb06c94a600bce33e94 Jun 20 19:30:34.303516 systemd-networkd[1737]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:30:34.278203 ignition[1735]: fetch-offline: fetch-offline passed Jun 20 19:30:34.278207 ignition[1735]: POST message to Packet Timeline Jun 20 19:30:34.278212 ignition[1735]: POST Status error: resource requires networking Jun 20 19:30:34.278287 ignition[1735]: Ignition finished successfully Jun 20 19:30:34.342262 ignition[1771]: Ignition 2.21.0 Jun 20 19:30:34.342268 ignition[1771]: Stage: kargs Jun 20 19:30:34.342454 ignition[1771]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:34.342461 ignition[1771]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jun 20 19:30:34.348787 ignition[1771]: kargs: kargs passed Jun 20 19:30:34.348793 ignition[1771]: POST message to Packet Timeline Jun 20 19:30:34.349215 ignition[1771]: GET https://metadata.packet.net/metadata: attempt #1 Jun 20 19:30:34.353041 ignition[1771]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35208->[::1]:53: read: connection refused Jun 20 19:30:34.553169 ignition[1771]: GET https://metadata.packet.net/metadata: attempt #2 Jun 20 19:30:34.553741 ignition[1771]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43579->[::1]:53: read: connection refused Jun 20 19:30:34.861862 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jun 20 19:30:34.864494 systemd-networkd[1737]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
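The ignition[1771] entries above show the kargs stage POSTing to the Packet timeline and retrying its metadata GET while name resolution still fails against [::1]:53; the attempts continue below until the network interfaces gain carrier and attempt #6 returns OK. The retry-with-growing-delay pattern looks roughly like the following sketch (illustrative only, not Ignition's actual implementation; the URL is the one from the log and the delay values are made up):

    import time
    import urllib.request
    import urllib.error

    def fetch_with_retries(url, attempts=6, first_delay=1.0):
        """GET a URL, retrying with an increasing delay, mirroring the
        attempt #1..#6 sequence visible in the log."""
        delay = first_delay
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                if attempt == attempts:
                    raise
                time.sleep(delay)
                delay *= 2  # back off while the network is still coming up

    # fetch_with_retries("https://metadata.packet.net/metadata")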
Jun 20 19:30:34.954307 ignition[1771]: GET https://metadata.packet.net/metadata: attempt #3 Jun 20 19:30:34.954801 ignition[1771]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33223->[::1]:53: read: connection refused Jun 20 19:30:35.460865 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jun 20 19:30:35.464242 systemd-networkd[1737]: eno1: Link UP Jun 20 19:30:35.464372 systemd-networkd[1737]: eno2: Link UP Jun 20 19:30:35.464489 systemd-networkd[1737]: enP1p1s0f0np0: Link UP Jun 20 19:30:35.464620 systemd-networkd[1737]: enP1p1s0f0np0: Gained carrier Jun 20 19:30:35.485067 systemd-networkd[1737]: enP1p1s0f1np1: Link UP Jun 20 19:30:35.486392 systemd-networkd[1737]: enP1p1s0f1np1: Gained carrier Jun 20 19:30:35.529891 systemd-networkd[1737]: enP1p1s0f0np0: DHCPv4 address 147.28.145.50/30, gateway 147.28.145.49 acquired from 147.28.144.140 Jun 20 19:30:35.754941 ignition[1771]: GET https://metadata.packet.net/metadata: attempt #4 Jun 20 19:30:35.755558 ignition[1771]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47154->[::1]:53: read: connection refused Jun 20 19:30:36.778202 systemd-networkd[1737]: enP1p1s0f0np0: Gained IPv6LL Jun 20 19:30:37.356377 ignition[1771]: GET https://metadata.packet.net/metadata: attempt #5 Jun 20 19:30:37.356982 ignition[1771]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47142->[::1]:53: read: connection refused Jun 20 19:30:37.418060 systemd-networkd[1737]: enP1p1s0f1np1: Gained IPv6LL Jun 20 19:30:40.560375 ignition[1771]: GET https://metadata.packet.net/metadata: attempt #6 Jun 20 19:30:41.117299 ignition[1771]: GET result: OK Jun 20 19:30:41.451046 ignition[1771]: Ignition finished successfully Jun 20 19:30:41.454821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:30:41.457704 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:30:41.499262 ignition[1795]: Ignition 2.21.0 Jun 20 19:30:41.499271 ignition[1795]: Stage: disks Jun 20 19:30:41.499416 ignition[1795]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:41.499425 ignition[1795]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jun 20 19:30:41.503068 ignition[1795]: disks: disks passed Jun 20 19:30:41.503074 ignition[1795]: POST message to Packet Timeline Jun 20 19:30:41.503095 ignition[1795]: GET https://metadata.packet.net/metadata: attempt #1 Jun 20 19:30:42.100519 ignition[1795]: GET result: OK Jun 20 19:30:42.408131 ignition[1795]: Ignition finished successfully Jun 20 19:30:42.412140 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:30:42.416715 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:30:42.424345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:30:42.432498 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:30:42.440866 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:30:42.449721 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:30:42.459981 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
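The DHCPv4 lease above, 147.28.145.50/30 with gateway 147.28.145.49, is a typical point-to-point style /30: the block holds exactly two usable addresses, one for the host and one for the gateway. Python's ipaddress module confirms the arithmetic (the addresses are taken directly from the log line):

    import ipaddress

    iface = ipaddress.ip_interface("147.28.145.50/30")
    print(iface.network)                 # 147.28.145.48/30
    print(list(iface.network.hosts()))   # [147.28.145.49, 147.28.145.50] -> gateway and host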
Jun 20 19:30:42.501814 systemd-fsck[1818]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 20 19:30:42.505865 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:30:42.513220 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:30:42.612870 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 40d60ae8-3eda-4465-8dd7-9dbfcfd71664 r/w with ordered data mode. Quota mode: none. Jun 20 19:30:42.613443 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:30:42.623647 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:30:42.634575 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:30:42.658351 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:30:42.734372 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 (259:6) scanned by mount (1831) Jun 20 19:30:42.734392 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 19:30:42.734402 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:30:42.734412 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:30:42.664890 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:30:42.754416 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jun 20 19:30:42.766282 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:30:42.766318 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:30:42.788182 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:30:42.796606 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:30:42.813294 coreos-metadata[1833]: Jun 20 19:30:42.800 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jun 20 19:30:42.829844 coreos-metadata[1850]: Jun 20 19:30:42.800 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jun 20 19:30:42.810483 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:30:42.868581 initrd-setup-root[1867]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:30:42.874895 initrd-setup-root[1874]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:30:42.881413 initrd-setup-root[1882]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:30:42.887642 initrd-setup-root[1889]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:30:42.958346 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:30:42.970286 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:30:42.999372 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:30:43.007854 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 19:30:43.032137 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:30:43.048936 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 20 19:30:43.064290 ignition[1963]: INFO : Ignition 2.21.0 Jun 20 19:30:43.064290 ignition[1963]: INFO : Stage: mount Jun 20 19:30:43.075303 ignition[1963]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:43.075303 ignition[1963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jun 20 19:30:43.075303 ignition[1963]: INFO : mount: mount passed Jun 20 19:30:43.075303 ignition[1963]: INFO : POST message to Packet Timeline Jun 20 19:30:43.075303 ignition[1963]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jun 20 19:30:43.342991 coreos-metadata[1850]: Jun 20 19:30:43.342 INFO Fetch successful Jun 20 19:30:43.388426 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jun 20 19:30:43.388616 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jun 20 19:30:43.414948 coreos-metadata[1833]: Jun 20 19:30:43.414 INFO Fetch successful Jun 20 19:30:43.456697 coreos-metadata[1833]: Jun 20 19:30:43.456 INFO wrote hostname ci-4344.1.0-a-403d322406 to /sysroot/etc/hostname Jun 20 19:30:43.460113 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:30:43.589056 ignition[1963]: INFO : GET result: OK Jun 20 19:30:43.895451 ignition[1963]: INFO : Ignition finished successfully Jun 20 19:30:43.899951 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:30:43.906427 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:30:43.936927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:30:43.968854 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 (259:6) scanned by mount (1996) Jun 20 19:30:43.993120 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 19:30:43.993142 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:30:44.006279 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:30:44.015265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:30:44.060643 ignition[2013]: INFO : Ignition 2.21.0 Jun 20 19:30:44.060643 ignition[2013]: INFO : Stage: files Jun 20 19:30:44.070340 ignition[2013]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:44.070340 ignition[2013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jun 20 19:30:44.070340 ignition[2013]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:30:44.070340 ignition[2013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:30:44.070340 ignition[2013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:30:44.070340 ignition[2013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:30:44.070340 ignition[2013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:30:44.070340 ignition[2013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:30:44.070340 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 19:30:44.070340 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jun 20 19:30:44.065406 unknown[2013]: wrote ssh authorized keys file for user: core Jun 20 19:30:44.203081 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:30:44.348025 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 19:30:44.358748 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jun 20 19:30:44.612958 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 19:30:44.969178 ignition[2013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 19:30:44.969178 ignition[2013]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:30:44.993626 ignition[2013]: INFO : files: files passed Jun 20 19:30:44.993626 ignition[2013]: INFO : POST message to Packet Timeline Jun 20 19:30:44.993626 ignition[2013]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jun 20 19:30:45.523632 ignition[2013]: INFO : GET result: OK Jun 20 19:30:46.005846 ignition[2013]: INFO : Ignition finished successfully Jun 20 19:30:46.008368 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:30:46.019134 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:30:46.049484 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:30:46.067814 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:30:46.068905 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:30:46.085502 initrd-setup-root-after-ignition[2058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:30:46.085502 initrd-setup-root-after-ignition[2058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:30:46.081866 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:30:46.135663 initrd-setup-root-after-ignition[2062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:30:46.092728 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:30:46.109060 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jun 20 19:30:46.166415 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:30:46.166599 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:30:46.177860 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:30:46.193724 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:30:46.204659 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:30:46.205651 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:30:46.242434 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:30:46.254958 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:30:46.289887 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:30:46.301506 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:30:46.313093 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:30:46.318877 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:30:46.318979 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:30:46.330345 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:30:46.341415 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:30:46.352650 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:30:46.363819 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:30:46.374876 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:30:46.385972 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:30:46.397086 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:30:46.408200 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:30:46.419267 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:30:46.430346 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:30:46.446876 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:30:46.457955 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:30:46.458051 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:30:46.469325 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:30:46.480273 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:30:46.491063 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:30:46.491926 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:30:46.502113 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:30:46.502219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:30:46.513239 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:30:46.513332 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:30:46.524317 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:30:46.535415 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jun 20 19:30:46.538882 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:30:46.552252 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:30:46.563539 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:30:46.574952 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:30:46.575041 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:30:46.586465 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:30:46.690933 ignition[2088]: INFO : Ignition 2.21.0 Jun 20 19:30:46.690933 ignition[2088]: INFO : Stage: umount Jun 20 19:30:46.690933 ignition[2088]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:46.690933 ignition[2088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jun 20 19:30:46.690933 ignition[2088]: INFO : umount: umount passed Jun 20 19:30:46.690933 ignition[2088]: INFO : POST message to Packet Timeline Jun 20 19:30:46.690933 ignition[2088]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jun 20 19:30:46.586555 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:30:46.598007 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:30:46.598100 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:30:46.609476 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:30:46.609564 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:30:46.620964 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 19:30:46.621048 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:30:46.638836 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:30:46.649634 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:30:46.649736 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:30:46.664485 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:30:46.672655 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:30:46.672758 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:30:46.685151 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:30:46.685236 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:30:46.699126 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:30:46.700993 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:30:46.701073 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:30:46.746653 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:30:46.746906 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:30:47.210384 ignition[2088]: INFO : GET result: OK Jun 20 19:30:48.037061 ignition[2088]: INFO : Ignition finished successfully Jun 20 19:30:48.039403 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:30:48.039676 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:30:48.047055 systemd[1]: Stopped target network.target - Network. Jun 20 19:30:48.056022 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:30:48.056098 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jun 20 19:30:48.065393 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:30:48.065440 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:30:48.074835 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:30:48.074889 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:30:48.084443 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:30:48.084498 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:30:48.094189 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:30:48.094269 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:30:48.103984 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:30:48.113701 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:30:48.123585 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:30:48.124897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:30:48.137335 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:30:48.138147 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:30:48.138407 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:30:48.151003 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:30:48.157645 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:30:48.157824 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:30:48.165325 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:30:48.165779 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 19:30:48.173936 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:30:48.174000 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:30:48.184538 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:30:48.194003 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:30:48.194058 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:30:48.204264 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:30:48.204305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:30:48.219725 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:30:48.219779 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:30:48.230263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:30:48.246938 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:30:48.251253 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:30:48.252554 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:30:48.263198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:30:48.263263 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:30:48.273749 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:30:48.273808 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jun 20 19:30:48.284761 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:30:48.284826 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:30:48.301445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:30:48.301482 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:30:48.312432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:30:48.312490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:30:48.324576 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:30:48.335217 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:30:48.335289 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:30:48.346768 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:30:48.346808 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:30:48.358536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:30:48.358585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:48.377975 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 19:30:48.378050 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:30:48.378080 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:30:48.378413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:30:48.378484 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:30:48.878336 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:30:48.878509 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:30:48.889862 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:30:48.900900 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:30:48.934308 systemd[1]: Switching root. Jun 20 19:30:48.993433 systemd-journald[909]: Journal stopped Jun 20 19:30:51.174194 systemd-journald[909]: Received SIGTERM from PID 1 (systemd). Jun 20 19:30:51.174222 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:30:51.174234 kernel: SELinux: policy capability open_perms=1 Jun 20 19:30:51.174242 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:30:51.174250 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:30:51.174258 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:30:51.174266 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:30:51.174276 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:30:51.174284 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:30:51.174291 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:30:51.174299 kernel: audit: type=1403 audit(1750447849.200:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:30:51.174309 systemd[1]: Successfully loaded SELinux policy in 140.916ms. Jun 20 19:30:51.174318 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.770ms. 
Jun 20 19:30:51.174329 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:30:51.174339 systemd[1]: Detected architecture arm64. Jun 20 19:30:51.174348 systemd[1]: Detected first boot. Jun 20 19:30:51.174356 systemd[1]: Hostname set to . Jun 20 19:30:51.174366 systemd[1]: Initializing machine ID from random generator. Jun 20 19:30:51.174374 zram_generator::config[2156]: No configuration found. Jun 20 19:30:51.174386 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:30:51.174395 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:30:51.174404 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:30:51.174413 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:30:51.174422 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:30:51.174431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:30:51.174440 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:30:51.174450 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:30:51.174459 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:30:51.174468 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:30:51.174478 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:30:51.174487 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:30:51.174496 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:30:51.174505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:30:51.174514 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:30:51.174527 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:30:51.174536 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:30:51.174545 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:30:51.174555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:30:51.174564 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 20 19:30:51.174573 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:30:51.174585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:30:51.174594 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:30:51.174604 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:30:51.174614 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:30:51.174623 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:30:51.174632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 20 19:30:51.174642 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:30:51.174651 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:30:51.174660 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:30:51.174670 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:30:51.174680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:30:51.174690 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:30:51.174700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:30:51.174709 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:30:51.174719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:30:51.174729 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:30:51.174738 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:30:51.174748 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:30:51.174757 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:30:51.174766 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:30:51.174776 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:30:51.174785 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:30:51.174796 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:30:51.174805 systemd[1]: Reached target machines.target - Containers. Jun 20 19:30:51.174815 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:30:51.174824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:30:51.174833 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:30:51.174843 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:30:51.174855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:30:51.174865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:30:51.174874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:30:51.174885 kernel: ACPI: bus type drm_connector registered Jun 20 19:30:51.174894 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:30:51.174903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:30:51.174912 kernel: fuse: init (API version 7.41) Jun 20 19:30:51.174920 kernel: loop: module loaded Jun 20 19:30:51.174930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:30:51.174940 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:30:51.174949 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:30:51.174959 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:30:51.174969 systemd[1]: Stopped systemd-fsck-usr.service. 
Jun 20 19:30:51.174979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:30:51.174989 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:30:51.174998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:30:51.175007 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:30:51.175017 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:30:51.175044 systemd-journald[2261]: Collecting audit messages is disabled. Jun 20 19:30:51.175068 systemd-journald[2261]: Journal started Jun 20 19:30:51.175087 systemd-journald[2261]: Runtime Journal (/run/log/journal/e09ac1e80d48476484f3443615fa5cb8) is 8M, max 4G, 3.9G free. Jun 20 19:30:49.761475 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:30:49.783584 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 19:30:49.783913 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:30:49.784215 systemd[1]: systemd-journald.service: Consumed 3.484s CPU time. Jun 20 19:30:51.197864 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:30:51.218896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:30:51.242052 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:30:51.242069 systemd[1]: Stopped verity-setup.service. Jun 20 19:30:51.267862 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:30:51.273096 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:30:51.278635 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:30:51.284093 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:30:51.289466 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:30:51.294898 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:30:51.300248 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:30:51.305684 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:30:51.311178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:30:51.316690 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:30:51.316883 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:30:51.322217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:30:51.322973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:30:51.328345 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:30:51.328517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:30:51.333705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:30:51.335882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:30:51.341106 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:30:51.341274 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:30:51.346645 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 20 19:30:51.346818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:30:51.351932 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:30:51.357195 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:30:51.363878 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:30:51.369164 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:30:51.384015 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:30:51.390388 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:30:51.410674 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:30:51.415552 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:30:51.415583 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:30:51.421184 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:30:51.427020 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:30:51.431968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:30:51.433386 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:30:51.439072 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:30:51.443844 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:30:51.444901 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:30:51.449628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:30:51.449948 systemd-journald[2261]: Time spent on flushing to /var/log/journal/e09ac1e80d48476484f3443615fa5cb8 is 25.532ms for 2475 entries. Jun 20 19:30:51.449948 systemd-journald[2261]: System Journal (/var/log/journal/e09ac1e80d48476484f3443615fa5cb8) is 8M, max 195.6M, 187.6M free. Jun 20 19:30:51.492473 systemd-journald[2261]: Received client request to flush runtime journal. Jun 20 19:30:51.492515 kernel: loop0: detected capacity change from 0 to 211168 Jun 20 19:30:51.450734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:30:51.467979 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:30:51.473752 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:30:51.479815 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:30:51.495350 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:30:51.495855 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:30:51.510054 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:30:51.515874 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:30:51.520551 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jun 20 19:30:51.525351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:30:51.529987 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:30:51.537878 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:30:51.543549 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:30:51.546862 kernel: loop1: detected capacity change from 0 to 8 Jun 20 19:30:51.579219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:30:51.586668 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:30:51.587483 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:30:51.601714 systemd-tmpfiles[2325]: ACLs are not supported, ignoring. Jun 20 19:30:51.601726 systemd-tmpfiles[2325]: ACLs are not supported, ignoring. Jun 20 19:30:51.604856 kernel: loop2: detected capacity change from 0 to 107312 Jun 20 19:30:51.605253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:30:51.688860 kernel: loop3: detected capacity change from 0 to 138376 Jun 20 19:30:51.693623 ldconfig[2294]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:30:51.695280 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:30:51.756866 kernel: loop4: detected capacity change from 0 to 211168 Jun 20 19:30:51.774859 kernel: loop5: detected capacity change from 0 to 8 Jun 20 19:30:51.786861 kernel: loop6: detected capacity change from 0 to 107312 Jun 20 19:30:51.802859 kernel: loop7: detected capacity change from 0 to 138376 Jun 20 19:30:51.810622 (sd-merge)[2345]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jun 20 19:30:51.811060 (sd-merge)[2345]: Merged extensions into '/usr'. Jun 20 19:30:51.814323 systemd[1]: Reload requested from client PID 2302 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:30:51.814335 systemd[1]: Reloading... Jun 20 19:30:51.852856 zram_generator::config[2371]: No configuration found. Jun 20 19:30:51.934121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:30:52.017743 systemd[1]: Reloading finished in 203 ms. Jun 20 19:30:52.047223 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:30:52.052182 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:30:52.080196 systemd[1]: Starting ensure-sysext.service... Jun 20 19:30:52.086141 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:30:52.092933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:30:52.103954 systemd[1]: Reload requested from client PID 2425 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:30:52.103965 systemd[1]: Reloading... Jun 20 19:30:52.104759 systemd-tmpfiles[2426]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:30:52.104782 systemd-tmpfiles[2426]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jun 20 19:30:52.105015 systemd-tmpfiles[2426]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:30:52.105192 systemd-tmpfiles[2426]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:30:52.105771 systemd-tmpfiles[2426]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:30:52.106009 systemd-tmpfiles[2426]: ACLs are not supported, ignoring. Jun 20 19:30:52.106052 systemd-tmpfiles[2426]: ACLs are not supported, ignoring. Jun 20 19:30:52.109139 systemd-tmpfiles[2426]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:30:52.109146 systemd-tmpfiles[2426]: Skipping /boot Jun 20 19:30:52.117877 systemd-tmpfiles[2426]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:30:52.117884 systemd-tmpfiles[2426]: Skipping /boot Jun 20 19:30:52.121844 systemd-udevd[2427]: Using default interface naming scheme 'v255'. Jun 20 19:30:52.147858 zram_generator::config[2454]: No configuration found. Jun 20 19:30:52.200865 kernel: IPMI message handler: version 39.2 Jun 20 19:30:52.210869 kernel: ipmi device interface Jun 20 19:30:52.223858 kernel: ipmi_ssif: IPMI SSIF Interface driver Jun 20 19:30:52.232587 kernel: ipmi_si: IPMI System Interface driver Jun 20 19:30:52.232644 kernel: MACsec IEEE 802.1AE Jun 20 19:30:52.232657 kernel: ipmi_si: Unable to find any System Interface(s) Jun 20 19:30:52.237481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:30:52.340287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jun 20 19:30:52.345101 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 20 19:30:52.345194 systemd[1]: Reloading finished in 240 ms. Jun 20 19:30:52.372135 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:30:52.400464 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:30:52.422943 systemd[1]: Finished ensure-sysext.service. Jun 20 19:30:52.445336 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:30:52.466908 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:30:52.471732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:30:52.472639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:30:52.478379 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:30:52.484193 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:30:52.489897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:30:52.494696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:30:52.495556 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
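The repeated "Duplicate line for path ..., ignoring" messages above come from systemd-tmpfiles: two tmpfiles.d fragments declare an entry for the same path, and the duplicate is skipped in favor of the line that was parsed first; /boot is likewise skipped because it is an autofs mount point at this stage. For readers unfamiliar with the format being parsed, a tmpfiles.d entry is a single whitespace-separated record per line; the fragment below is purely hypothetical and not taken from this system:

    # /etc/tmpfiles.d/example.conf  (hypothetical)
    # Type  Path              Mode  User  Group  Age  Argument
    d       /var/lib/example  0755  root  root   -    -
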
Jun 20 19:30:52.500334 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:30:52.501458 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:30:52.503857 augenrules[2702]: No rules Jun 20 19:30:52.508029 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:30:52.514561 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:30:52.520715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:30:52.526206 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:30:52.531585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:52.536751 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:30:52.537539 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:30:52.543639 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:30:52.548195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:30:52.548356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:30:52.552716 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:30:52.552881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:30:52.558131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:30:52.558317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:30:52.563225 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:30:52.563385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:30:52.569058 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:30:52.574409 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:30:52.579552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:52.592530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:30:52.592648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:30:52.593960 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:30:52.619094 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:30:52.623564 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:30:52.623989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:30:52.629382 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:30:52.656476 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:30:52.714078 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:30:52.718805 systemd[1]: Reached target time-set.target - System Time Set. 
Jun 20 19:30:52.722190 systemd-resolved[2709]: Positive Trust Anchors: Jun 20 19:30:52.722205 systemd-resolved[2709]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:30:52.722236 systemd-resolved[2709]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:30:52.725720 systemd-resolved[2709]: Using system hostname 'ci-4344.1.0-a-403d322406'. Jun 20 19:30:52.727106 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:30:52.728037 systemd-networkd[2708]: lo: Link UP Jun 20 19:30:52.728043 systemd-networkd[2708]: lo: Gained carrier Jun 20 19:30:52.731474 systemd-networkd[2708]: bond0: netdev ready Jun 20 19:30:52.731567 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:30:52.735874 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:30:52.740180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:30:52.740471 systemd-networkd[2708]: Enumeration completed Jun 20 19:30:52.744464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:30:52.748887 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:30:52.749328 systemd-networkd[2708]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5a:06:d8.network. Jun 20 19:30:52.753231 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:30:52.757484 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:30:52.761716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:30:52.761739 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:30:52.765925 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:30:52.770864 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:30:52.776437 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:30:52.782716 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:30:52.791694 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:30:52.796563 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:30:52.801352 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:30:52.805835 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:30:52.810290 systemd[1]: Reached target network.target - Network. Jun 20 19:30:52.814595 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:30:52.818890 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:30:52.823134 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
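The "Positive Trust Anchors" entry shows systemd-resolved loading the built-in DNSSEC root trust anchor plus the usual negative anchors for private and reverse-lookup zones, after which it adopts the system hostname and systemd-networkd begins enumerating links. As an illustrative aside (not output from this host), resolver and per-link DNS state can later be checked with resolvectl:

    # per-link DNS servers, DNSSEC mode and current global configuration
    resolvectl status
    # resolve a name through systemd-resolved (example.com is only a placeholder)
    resolvectl query example.com
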
Jun 20 19:30:52.823155 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:30:52.824168 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:30:52.838657 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:30:52.844070 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:30:52.849508 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:30:52.854899 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:30:52.860296 coreos-metadata[2752]: Jun 20 19:30:52.860 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jun 20 19:30:52.860350 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:30:52.863428 coreos-metadata[2752]: Jun 20 19:30:52.863 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jun 20 19:30:52.864722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:30:52.865113 jq[2757]: false Jun 20 19:30:52.865777 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:30:52.871249 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:30:52.876322 extend-filesystems[2758]: Found /dev/nvme0n1p6 Jun 20 19:30:52.881207 extend-filesystems[2758]: Found /dev/nvme0n1p9 Jun 20 19:30:52.876744 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:30:52.890290 extend-filesystems[2758]: Checking size of /dev/nvme0n1p9 Jun 20 19:30:52.886738 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:30:52.899285 extend-filesystems[2758]: Resized partition /dev/nvme0n1p9 Jun 20 19:30:52.921031 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Jun 20 19:30:52.899066 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:30:52.921344 extend-filesystems[2781]: resize2fs 1.47.2 (1-Jan-2025) Jun 20 19:30:52.917415 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:30:52.926958 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:30:52.938199 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:30:52.938738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:30:52.939781 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:30:52.945510 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:30:52.951797 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:30:52.953216 jq[2795]: true Jun 20 19:30:52.957053 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:30:52.957249 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:30:52.957490 systemd[1]: motdgen.service: Deactivated successfully. 
Jun 20 19:30:52.957673 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:30:52.963372 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:30:52.963570 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:30:52.965059 systemd-logind[2783]: Watching system buttons on /dev/input/event0 (Power Button) Jun 20 19:30:52.968497 systemd-logind[2783]: New seat seat0. Jun 20 19:30:52.970600 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:30:52.975781 update_engine[2794]: I20250620 19:30:52.975648 2794 main.cc:92] Flatcar Update Engine starting Jun 20 19:30:52.976402 (ntainerd)[2800]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:30:52.978969 jq[2799]: true Jun 20 19:30:52.986688 tar[2798]: linux-arm64/LICENSE Jun 20 19:30:52.986855 tar[2798]: linux-arm64/helm Jun 20 19:30:52.994074 dbus-daemon[2753]: [system] SELinux support is enabled Jun 20 19:30:52.995578 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:30:52.997852 update_engine[2794]: I20250620 19:30:52.997817 2794 update_check_scheduler.cc:74] Next update check in 6m56s Jun 20 19:30:53.004661 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:30:53.004685 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:30:53.005167 dbus-daemon[2753]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 19:30:53.008311 bash[2827]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:30:53.009624 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:30:53.009640 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:30:53.014685 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:30:53.020286 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:30:53.027052 systemd[1]: Starting sshkeys.service... Jun 20 19:30:53.047279 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:30:53.056771 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 19:30:53.062620 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 20 19:30:53.080714 locksmithd[2830]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:30:53.082154 coreos-metadata[2838]: Jun 20 19:30:53.082 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jun 20 19:30:53.083354 coreos-metadata[2838]: Jun 20 19:30:53.083 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jun 20 19:30:53.133256 containerd[2800]: time="2025-06-20T19:30:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:30:53.134311 containerd[2800]: time="2025-06-20T19:30:53.134283840Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:30:53.142121 containerd[2800]: time="2025-06-20T19:30:53.142092080Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.56µs" Jun 20 19:30:53.142141 containerd[2800]: time="2025-06-20T19:30:53.142122320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:30:53.142157 containerd[2800]: time="2025-06-20T19:30:53.142139600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:30:53.142299 containerd[2800]: time="2025-06-20T19:30:53.142286760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:30:53.142321 containerd[2800]: time="2025-06-20T19:30:53.142302760Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:30:53.142338 containerd[2800]: time="2025-06-20T19:30:53.142325360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142388 containerd[2800]: time="2025-06-20T19:30:53.142375480Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142408 containerd[2800]: time="2025-06-20T19:30:53.142388600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142615 containerd[2800]: time="2025-06-20T19:30:53.142596320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142641 containerd[2800]: time="2025-06-20T19:30:53.142615280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142641 containerd[2800]: time="2025-06-20T19:30:53.142626240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142641 containerd[2800]: time="2025-06-20T19:30:53.142634440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142715 containerd[2800]: time="2025-06-20T19:30:53.142704400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142911 
containerd[2800]: time="2025-06-20T19:30:53.142896640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142940 containerd[2800]: time="2025-06-20T19:30:53.142924560Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:30:53.142940 containerd[2800]: time="2025-06-20T19:30:53.142935480Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:30:53.142983 containerd[2800]: time="2025-06-20T19:30:53.142963920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:30:53.143169 containerd[2800]: time="2025-06-20T19:30:53.143158600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:30:53.143232 containerd[2800]: time="2025-06-20T19:30:53.143222040Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:30:53.150909 containerd[2800]: time="2025-06-20T19:30:53.150878000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:30:53.150966 containerd[2800]: time="2025-06-20T19:30:53.150927440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:30:53.150966 containerd[2800]: time="2025-06-20T19:30:53.150941000Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:30:53.150966 containerd[2800]: time="2025-06-20T19:30:53.150954040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:30:53.150966 containerd[2800]: time="2025-06-20T19:30:53.150965760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:30:53.151091 containerd[2800]: time="2025-06-20T19:30:53.150982000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:30:53.151091 containerd[2800]: time="2025-06-20T19:30:53.150993560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:30:53.151091 containerd[2800]: time="2025-06-20T19:30:53.151005040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:30:53.151091 containerd[2800]: time="2025-06-20T19:30:53.151025800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:30:53.151091 containerd[2800]: time="2025-06-20T19:30:53.151036080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:30:53.151091 containerd[2800]: time="2025-06-20T19:30:53.151045080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:30:53.151091 containerd[2800]: time="2025-06-20T19:30:53.151056560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:30:53.151228 containerd[2800]: time="2025-06-20T19:30:53.151174000Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:30:53.151228 containerd[2800]: 
time="2025-06-20T19:30:53.151193080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:30:53.151228 containerd[2800]: time="2025-06-20T19:30:53.151208520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:30:53.151228 containerd[2800]: time="2025-06-20T19:30:53.151219640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 19:30:53.151293 containerd[2800]: time="2025-06-20T19:30:53.151230480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:30:53.151293 containerd[2800]: time="2025-06-20T19:30:53.151241400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:30:53.151293 containerd[2800]: time="2025-06-20T19:30:53.151252320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:30:53.151293 containerd[2800]: time="2025-06-20T19:30:53.151262040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:30:53.151293 containerd[2800]: time="2025-06-20T19:30:53.151272800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:30:53.151293 containerd[2800]: time="2025-06-20T19:30:53.151287360Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:30:53.151432 containerd[2800]: time="2025-06-20T19:30:53.151297960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:30:53.151487 containerd[2800]: time="2025-06-20T19:30:53.151475120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:30:53.151510 containerd[2800]: time="2025-06-20T19:30:53.151492080Z" level=info msg="Start snapshots syncer" Jun 20 19:30:53.151529 containerd[2800]: time="2025-06-20T19:30:53.151517080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:30:53.151736 containerd[2800]: time="2025-06-20T19:30:53.151708120Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:30:53.151814 containerd[2800]: time="2025-06-20T19:30:53.151753240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:30:53.151835 containerd[2800]: time="2025-06-20T19:30:53.151821240Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:30:53.151962 containerd[2800]: time="2025-06-20T19:30:53.151947360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:30:53.151989 containerd[2800]: time="2025-06-20T19:30:53.151971400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:30:53.151989 containerd[2800]: time="2025-06-20T19:30:53.151982760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:30:53.152026 containerd[2800]: time="2025-06-20T19:30:53.151996960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:30:53.152026 containerd[2800]: time="2025-06-20T19:30:53.152009680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:30:53.152060 containerd[2800]: time="2025-06-20T19:30:53.152025600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:30:53.152060 containerd[2800]: time="2025-06-20T19:30:53.152037680Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:30:53.152095 containerd[2800]: time="2025-06-20T19:30:53.152066040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:30:53.152095 containerd[2800]: 
time="2025-06-20T19:30:53.152078040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:30:53.152095 containerd[2800]: time="2025-06-20T19:30:53.152088680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:30:53.152144 containerd[2800]: time="2025-06-20T19:30:53.152121360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:30:53.152144 containerd[2800]: time="2025-06-20T19:30:53.152134640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:30:53.152178 containerd[2800]: time="2025-06-20T19:30:53.152142840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:30:53.152178 containerd[2800]: time="2025-06-20T19:30:53.152153520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:30:53.152178 containerd[2800]: time="2025-06-20T19:30:53.152162040Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:30:53.152178 containerd[2800]: time="2025-06-20T19:30:53.152171760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:30:53.152245 containerd[2800]: time="2025-06-20T19:30:53.152181720Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:30:53.152269 containerd[2800]: time="2025-06-20T19:30:53.152261000Z" level=info msg="runtime interface created" Jun 20 19:30:53.152269 containerd[2800]: time="2025-06-20T19:30:53.152267640Z" level=info msg="created NRI interface" Jun 20 19:30:53.152303 containerd[2800]: time="2025-06-20T19:30:53.152275720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:30:53.152303 containerd[2800]: time="2025-06-20T19:30:53.152286360Z" level=info msg="Connect containerd service" Jun 20 19:30:53.152340 containerd[2800]: time="2025-06-20T19:30:53.152310560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:30:53.153051 containerd[2800]: time="2025-06-20T19:30:53.153030440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:30:53.232656 containerd[2800]: time="2025-06-20T19:30:53.232616040Z" level=info msg="Start subscribing containerd event" Jun 20 19:30:53.232689 containerd[2800]: time="2025-06-20T19:30:53.232675160Z" level=info msg="Start recovering state" Jun 20 19:30:53.232771 containerd[2800]: time="2025-06-20T19:30:53.232761880Z" level=info msg="Start event monitor" Jun 20 19:30:53.232792 containerd[2800]: time="2025-06-20T19:30:53.232777280Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:30:53.232792 containerd[2800]: time="2025-06-20T19:30:53.232787160Z" level=info msg="Start streaming server" Jun 20 19:30:53.232837 containerd[2800]: time="2025-06-20T19:30:53.232796320Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:30:53.232837 containerd[2800]: 
time="2025-06-20T19:30:53.232803080Z" level=info msg="runtime interface starting up..." Jun 20 19:30:53.232837 containerd[2800]: time="2025-06-20T19:30:53.232808760Z" level=info msg="starting plugins..." Jun 20 19:30:53.232837 containerd[2800]: time="2025-06-20T19:30:53.232821200Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:30:53.232937 containerd[2800]: time="2025-06-20T19:30:53.232912280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:30:53.232977 containerd[2800]: time="2025-06-20T19:30:53.232967880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:30:53.233034 containerd[2800]: time="2025-06-20T19:30:53.233026040Z" level=info msg="containerd successfully booted in 0.100152s" Jun 20 19:30:53.233106 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:30:53.319685 tar[2798]: linux-arm64/README.md Jun 20 19:30:53.345890 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:30:53.420868 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Jun 20 19:30:53.437944 extend-filesystems[2781]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 20 19:30:53.437944 extend-filesystems[2781]: old_desc_blocks = 1, new_desc_blocks = 112 Jun 20 19:30:53.437944 extend-filesystems[2781]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Jun 20 19:30:53.468792 extend-filesystems[2758]: Resized filesystem in /dev/nvme0n1p9 Jun 20 19:30:53.440281 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:30:53.440624 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:30:53.454290 systemd[1]: extend-filesystems.service: Consumed 205ms CPU time, 68.9M memory peak. Jun 20 19:30:53.830867 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jun 20 19:30:53.847860 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Jun 20 19:30:53.848802 systemd-networkd[2708]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5a:06:d9.network. Jun 20 19:30:53.863539 coreos-metadata[2752]: Jun 20 19:30:53.863 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jun 20 19:30:53.863941 coreos-metadata[2752]: Jun 20 19:30:53.863 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jun 20 19:30:54.024741 sshd_keygen[2786]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:30:54.043703 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:30:54.050898 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:30:54.081333 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:30:54.081524 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:30:54.083442 coreos-metadata[2838]: Jun 20 19:30:54.083 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jun 20 19:30:54.083894 coreos-metadata[2838]: Jun 20 19:30:54.083 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jun 20 19:30:54.089130 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:30:54.124662 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:30:54.133142 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:30:54.141139 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jun 20 19:30:54.146607 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:30:54.407867 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jun 20 19:30:54.424571 systemd-networkd[2708]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jun 20 19:30:54.424853 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Jun 20 19:30:54.425669 systemd-networkd[2708]: enP1p1s0f0np0: Link UP Jun 20 19:30:54.425914 systemd-networkd[2708]: enP1p1s0f0np0: Gained carrier Jun 20 19:30:54.426656 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:30:54.444854 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jun 20 19:30:54.451186 systemd-networkd[2708]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5a:06:d8.network. Jun 20 19:30:54.451467 systemd-networkd[2708]: enP1p1s0f1np1: Link UP Jun 20 19:30:54.451662 systemd-networkd[2708]: enP1p1s0f1np1: Gained carrier Jun 20 19:30:54.458139 systemd-networkd[2708]: bond0: Link UP Jun 20 19:30:54.458421 systemd-networkd[2708]: bond0: Gained carrier Jun 20 19:30:54.458599 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:30:54.459185 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:30:54.459445 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:30:54.459580 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:30:54.549061 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Jun 20 19:30:54.549095 kernel: bond0: active interface up! Jun 20 19:30:54.672858 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Jun 20 19:30:55.864101 coreos-metadata[2752]: Jun 20 19:30:55.864 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jun 20 19:30:56.084008 coreos-metadata[2838]: Jun 20 19:30:56.083 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jun 20 19:30:56.361902 systemd-networkd[2708]: bond0: Gained IPv6LL Jun 20 19:30:56.362439 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:30:56.490183 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:30:56.490297 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:30:56.492073 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:30:56.497853 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:30:56.505081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:30:56.527341 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:30:56.549568 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:30:57.193224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
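Here systemd-networkd assembles bond0 from /etc/systemd/network/05-bond0.network with the two mlx5 ports as slaves, and the kernel's "No 802.3ad response from the link partner" warning suggests an LACP (802.3ad) bond whose peer has not finished negotiating; both slaves nevertheless come up at 25000 Mbps full duplex. The actual unit contents are not part of this log; the fragment below is only a hypothetical sketch of what a networkd bond definition of this kind commonly looks like:

    # /etc/systemd/network/05-bond0.netdev  (hypothetical sketch)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # /etc/systemd/network/05-bond0.network  (hypothetical sketch)
    [Match]
    Name=bond0

    [Network]
    DHCP=yes
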
Jun 20 19:30:57.199315 (kubelet)[2916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:30:57.465862 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Jun 20 19:30:57.466065 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Jun 20 19:30:57.599791 kubelet[2916]: E0620 19:30:57.599758 2916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:30:57.602415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:30:57.602548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:30:57.602890 systemd[1]: kubelet.service: Consumed 725ms CPU time, 266.1M memory peak. Jun 20 19:30:58.273774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:30:58.279899 systemd[1]: Started sshd@0-147.28.145.50:22-147.75.109.163:58984.service - OpenSSH per-connection server daemon (147.75.109.163:58984). Jun 20 19:30:58.438224 coreos-metadata[2838]: Jun 20 19:30:58.438 INFO Fetch successful Jun 20 19:30:58.487633 unknown[2838]: wrote ssh authorized keys file for user: core Jun 20 19:30:58.496876 coreos-metadata[2752]: Jun 20 19:30:58.496 INFO Fetch successful Jun 20 19:30:58.515869 update-ssh-keys[2945]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:30:58.517528 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:30:58.524128 systemd[1]: Finished sshkeys.service. Jun 20 19:30:58.571644 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:30:58.578338 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jun 20 19:30:58.712720 sshd[2941]: Accepted publickey for core from 147.75.109.163 port 58984 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:30:58.714646 sshd-session[2941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:58.720164 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:30:58.725987 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:30:58.735910 systemd-logind[2783]: New session 1 of user core. Jun 20 19:30:58.755199 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:30:58.762542 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:30:58.789766 (systemd)[2957]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:30:58.791717 systemd-logind[2783]: New session c1 of user core. Jun 20 19:30:58.859472 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jun 20 19:30:58.864652 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:30:58.910551 systemd[2957]: Queued start job for default target default.target. Jun 20 19:30:58.930013 systemd[2957]: Created slice app.slice - User Application Slice. Jun 20 19:30:58.930037 systemd[2957]: Reached target paths.target - Paths. Jun 20 19:30:58.930069 systemd[2957]: Reached target timers.target - Timers. Jun 20 19:30:58.931246 systemd[2957]: Starting dbus.socket - D-Bus User Message Bus Socket... 
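The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so a failure this early in provisioning is expected (the unit is restarted further down and fails the same way once more until the file appears). Purely for illustration, a minimal KubeletConfiguration of the kind that ends up at that path looks roughly like the hypothetical sketch below; in practice the file is generated rather than hand-written:

    # /var/lib/kubelet/config.yaml  (hypothetical sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
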
Jun 20 19:30:58.939254 systemd[2957]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:30:58.939302 systemd[2957]: Reached target sockets.target - Sockets. Jun 20 19:30:58.939343 systemd[2957]: Reached target basic.target - Basic System. Jun 20 19:30:58.939372 systemd[2957]: Reached target default.target - Main User Target. Jun 20 19:30:58.939394 systemd[2957]: Startup finished in 143ms. Jun 20 19:30:58.939555 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:30:58.945545 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:30:58.950257 systemd[1]: Startup finished in 4.896s (kernel) + 19.768s (initrd) + 9.890s (userspace) = 34.555s. Jun 20 19:30:58.975676 login[2892]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:58.976564 login[2893]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:58.978774 systemd-logind[2783]: New session 3 of user core. Jun 20 19:30:58.980079 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:30:58.981964 systemd-logind[2783]: New session 2 of user core. Jun 20 19:30:58.983025 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:30:59.255553 systemd[1]: Started sshd@1-147.28.145.50:22-147.75.109.163:60594.service - OpenSSH per-connection server daemon (147.75.109.163:60594). Jun 20 19:30:59.662867 sshd[2999]: Accepted publickey for core from 147.75.109.163 port 60594 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:30:59.664071 sshd-session[2999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:30:59.667034 systemd-logind[2783]: New session 4 of user core. Jun 20 19:30:59.688019 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:30:59.958728 sshd[3001]: Connection closed by 147.75.109.163 port 60594 Jun 20 19:30:59.959036 sshd-session[2999]: pam_unix(sshd:session): session closed for user core Jun 20 19:30:59.961726 systemd[1]: sshd@1-147.28.145.50:22-147.75.109.163:60594.service: Deactivated successfully. Jun 20 19:30:59.964236 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:30:59.964773 systemd-logind[2783]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:30:59.965559 systemd-logind[2783]: Removed session 4. Jun 20 19:31:00.037397 systemd[1]: Started sshd@2-147.28.145.50:22-147.75.109.163:60598.service - OpenSSH per-connection server daemon (147.75.109.163:60598). Jun 20 19:31:00.440573 sshd[3007]: Accepted publickey for core from 147.75.109.163 port 60598 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:31:00.441721 sshd-session[3007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:00.444495 systemd-logind[2783]: New session 5 of user core. Jun 20 19:31:00.467018 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:31:00.729346 sshd[3009]: Connection closed by 147.75.109.163 port 60598 Jun 20 19:31:00.729645 sshd-session[3007]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:00.732388 systemd[1]: sshd@2-147.28.145.50:22-147.75.109.163:60598.service: Deactivated successfully. Jun 20 19:31:00.733765 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:31:00.734298 systemd-logind[2783]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:31:00.735084 systemd-logind[2783]: Removed session 5. 
Jun 20 19:31:00.809340 systemd[1]: Started sshd@3-147.28.145.50:22-147.75.109.163:60600.service - OpenSSH per-connection server daemon (147.75.109.163:60600). Jun 20 19:31:01.216121 sshd[3015]: Accepted publickey for core from 147.75.109.163 port 60600 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:31:01.217239 sshd-session[3015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:01.220156 systemd-logind[2783]: New session 6 of user core. Jun 20 19:31:01.241023 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:31:01.369474 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:31:01.511414 sshd[3017]: Connection closed by 147.75.109.163 port 60600 Jun 20 19:31:01.511907 sshd-session[3015]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:01.515554 systemd[1]: sshd@3-147.28.145.50:22-147.75.109.163:60600.service: Deactivated successfully. Jun 20 19:31:01.517222 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:31:01.517790 systemd-logind[2783]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:31:01.518615 systemd-logind[2783]: Removed session 6. Jun 20 19:31:01.594468 systemd[1]: Started sshd@4-147.28.145.50:22-147.75.109.163:60612.service - OpenSSH per-connection server daemon (147.75.109.163:60612). Jun 20 19:31:02.001403 sshd[3024]: Accepted publickey for core from 147.75.109.163 port 60612 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:31:02.002515 sshd-session[3024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:02.005497 systemd-logind[2783]: New session 7 of user core. Jun 20 19:31:02.026960 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:31:02.239100 sudo[3027]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:31:02.239347 sudo[3027]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:02.256419 sudo[3027]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:02.319049 sshd[3026]: Connection closed by 147.75.109.163 port 60612 Jun 20 19:31:02.319331 sshd-session[3024]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:02.321907 systemd[1]: sshd@4-147.28.145.50:22-147.75.109.163:60612.service: Deactivated successfully. Jun 20 19:31:02.323474 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:31:02.324664 systemd-logind[2783]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:31:02.325628 systemd-logind[2783]: Removed session 7. Jun 20 19:31:02.398594 systemd[1]: Started sshd@5-147.28.145.50:22-147.75.109.163:60618.service - OpenSSH per-connection server daemon (147.75.109.163:60618). Jun 20 19:31:02.804270 sshd[3033]: Accepted publickey for core from 147.75.109.163 port 60618 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:31:02.805489 sshd-session[3033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:02.808424 systemd-logind[2783]: New session 8 of user core. Jun 20 19:31:02.830017 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 20 19:31:03.035724 sudo[3037]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:31:03.035988 sudo[3037]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:03.038886 sudo[3037]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:03.043110 sudo[3036]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:31:03.043349 sudo[3036]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:03.050101 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:31:03.095266 augenrules[3059]: No rules Jun 20 19:31:03.096284 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:31:03.097892 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:31:03.098650 sudo[3036]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:03.160663 sshd[3035]: Connection closed by 147.75.109.163 port 60618 Jun 20 19:31:03.161014 sshd-session[3033]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:03.163967 systemd[1]: sshd@5-147.28.145.50:22-147.75.109.163:60618.service: Deactivated successfully. Jun 20 19:31:03.167109 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:31:03.167726 systemd-logind[2783]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:31:03.168801 systemd-logind[2783]: Removed session 8. Jun 20 19:31:03.243433 systemd[1]: Started sshd@6-147.28.145.50:22-147.75.109.163:60626.service - OpenSSH per-connection server daemon (147.75.109.163:60626). Jun 20 19:31:03.645210 sshd[3068]: Accepted publickey for core from 147.75.109.163 port 60626 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:31:03.646331 sshd-session[3068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:03.649875 systemd-logind[2783]: New session 9 of user core. Jun 20 19:31:03.672017 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:31:03.876514 sudo[3073]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:31:03.876774 sudo[3073]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:04.169676 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:31:04.184220 (dockerd)[3101]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:31:04.388570 dockerd[3101]: time="2025-06-20T19:31:04.388520000Z" level=info msg="Starting up" Jun 20 19:31:04.389717 dockerd[3101]: time="2025-06-20T19:31:04.389694400Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:31:04.417766 dockerd[3101]: time="2025-06-20T19:31:04.417741400Z" level=info msg="Loading containers: start." Jun 20 19:31:04.429863 kernel: Initializing XFRM netlink socket Jun 20 19:31:04.594075 systemd-timesyncd[2710]: Network configuration changed, trying to establish connection. Jun 20 19:31:04.625256 systemd-networkd[2708]: docker0: Link UP Jun 20 19:31:04.631087 dockerd[3101]: time="2025-06-20T19:31:04.631055520Z" level=info msg="Loading containers: done." 
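dockerd comes up next (docker.service is started shortly after the install script invoked via sudo above): it creates the XFRM netlink socket, brings up the docker0 bridge via systemd-networkd and finishes loading containers. Once it logs "API listen on /run/docker.sock" just below, the daemon is reachable over that Unix socket; the probe here is only an illustrative sketch, not a command taken from this log:

    # query the Docker daemon's version over its local API socket
    curl --unix-socket /run/docker.sock http://localhost/version
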
Jun 20 19:31:04.640196 dockerd[3101]: time="2025-06-20T19:31:04.640165760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:31:04.640262 dockerd[3101]: time="2025-06-20T19:31:04.640234520Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:31:04.640345 dockerd[3101]: time="2025-06-20T19:31:04.640334280Z" level=info msg="Initializing buildkit" Jun 20 19:31:04.655006 dockerd[3101]: time="2025-06-20T19:31:04.654976640Z" level=info msg="Completed buildkit initialization" Jun 20 19:31:04.660836 dockerd[3101]: time="2025-06-20T19:31:04.660803800Z" level=info msg="Daemon has completed initialization" Jun 20 19:31:04.660929 dockerd[3101]: time="2025-06-20T19:31:04.660883280Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:31:04.660979 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:31:05.106997 containerd[2800]: time="2025-06-20T19:31:05.106964760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 19:31:05.173203 systemd-timesyncd[2710]: Contacted time server [2604:2dc0:202:300::2459]:123 (2.flatcar.pool.ntp.org). Jun 20 19:31:05.173281 systemd-timesyncd[2710]: Initial clock synchronization to Fri 2025-06-20 19:31:04.784517 UTC. Jun 20 19:31:05.407626 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck671048495-merged.mount: Deactivated successfully. Jun 20 19:31:05.557699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711129596.mount: Deactivated successfully. Jun 20 19:31:06.135990 containerd[2800]: time="2025-06-20T19:31:06.135949480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:06.136294 containerd[2800]: time="2025-06-20T19:31:06.135975169Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jun 20 19:31:06.136957 containerd[2800]: time="2025-06-20T19:31:06.136939476Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:06.139071 containerd[2800]: time="2025-06-20T19:31:06.139050463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:06.139964 containerd[2800]: time="2025-06-20T19:31:06.139939529Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.03293498s" Jun 20 19:31:06.140035 containerd[2800]: time="2025-06-20T19:31:06.139973210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jun 20 19:31:06.141135 containerd[2800]: time="2025-06-20T19:31:06.141111898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 
19:31:06.879533 containerd[2800]: time="2025-06-20T19:31:06.879498961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:06.879626 containerd[2800]: time="2025-06-20T19:31:06.879521187Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jun 20 19:31:06.880360 containerd[2800]: time="2025-06-20T19:31:06.880331474Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:06.882644 containerd[2800]: time="2025-06-20T19:31:06.882617983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:06.883572 containerd[2800]: time="2025-06-20T19:31:06.883548571Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 742.412201ms" Jun 20 19:31:06.883621 containerd[2800]: time="2025-06-20T19:31:06.883570530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jun 20 19:31:06.883989 containerd[2800]: time="2025-06-20T19:31:06.883968462Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 19:31:07.852822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:31:07.854238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:07.992477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:07.995808 (kubelet)[3438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:31:08.037490 kubelet[3438]: E0620 19:31:08.037460 3438 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:31:08.040699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:31:08.040828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:31:08.041159 systemd[1]: kubelet.service: Consumed 148ms CPU time, 114.8M memory peak. 
Jun 20 19:31:08.146060 containerd[2800]: time="2025-06-20T19:31:08.145988545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:08.146263 containerd[2800]: time="2025-06-20T19:31:08.146002602Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jun 20 19:31:08.146917 containerd[2800]: time="2025-06-20T19:31:08.146896873Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:08.149141 containerd[2800]: time="2025-06-20T19:31:08.149119032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:08.150055 containerd[2800]: time="2025-06-20T19:31:08.150032136Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.266031406s" Jun 20 19:31:08.150080 containerd[2800]: time="2025-06-20T19:31:08.150062060Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jun 20 19:31:08.150419 containerd[2800]: time="2025-06-20T19:31:08.150397585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 19:31:08.911359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253495915.mount: Deactivated successfully. 
Jun 20 19:31:09.511301 containerd[2800]: time="2025-06-20T19:31:09.511257847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:09.511613 containerd[2800]: time="2025-06-20T19:31:09.511298984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jun 20 19:31:09.511908 containerd[2800]: time="2025-06-20T19:31:09.511889644Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:09.513284 containerd[2800]: time="2025-06-20T19:31:09.513263607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:09.513853 containerd[2800]: time="2025-06-20T19:31:09.513823115Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.363393593s" Jun 20 19:31:09.513904 containerd[2800]: time="2025-06-20T19:31:09.513859183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jun 20 19:31:09.514453 containerd[2800]: time="2025-06-20T19:31:09.514269622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 19:31:09.732091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493978607.mount: Deactivated successfully. 
Jun 20 19:31:10.932406 containerd[2800]: time="2025-06-20T19:31:10.932364600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:10.932837 containerd[2800]: time="2025-06-20T19:31:10.932356245Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jun 20 19:31:10.933321 containerd[2800]: time="2025-06-20T19:31:10.933299607Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:10.935645 containerd[2800]: time="2025-06-20T19:31:10.935623407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:10.936657 containerd[2800]: time="2025-06-20T19:31:10.936616474Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.422310488s" Jun 20 19:31:10.936686 containerd[2800]: time="2025-06-20T19:31:10.936669286Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jun 20 19:31:10.937073 containerd[2800]: time="2025-06-20T19:31:10.937053704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:31:11.232389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141267059.mount: Deactivated successfully. 
Jun 20 19:31:11.232813 containerd[2800]: time="2025-06-20T19:31:11.232734000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:31:11.232813 containerd[2800]: time="2025-06-20T19:31:11.232800424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jun 20 19:31:11.233458 containerd[2800]: time="2025-06-20T19:31:11.233408493Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:31:11.235041 containerd[2800]: time="2025-06-20T19:31:11.235013772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:31:11.235655 containerd[2800]: time="2025-06-20T19:31:11.235625157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 298.545576ms" Jun 20 19:31:11.235760 containerd[2800]: time="2025-06-20T19:31:11.235742987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 20 19:31:11.236072 containerd[2800]: time="2025-06-20T19:31:11.236053457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 19:31:11.544876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678770067.mount: Deactivated successfully. 
Jun 20 19:31:14.329137 containerd[2800]: time="2025-06-20T19:31:14.329068905Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jun 20 19:31:14.329137 containerd[2800]: time="2025-06-20T19:31:14.329089200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:14.330037 containerd[2800]: time="2025-06-20T19:31:14.330012258Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:14.332642 containerd[2800]: time="2025-06-20T19:31:14.332588034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:14.333869 containerd[2800]: time="2025-06-20T19:31:14.333768170Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.097620061s" Jun 20 19:31:14.333869 containerd[2800]: time="2025-06-20T19:31:14.333805929Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jun 20 19:31:18.291164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:31:18.292628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:18.432448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:18.435765 (kubelet)[3670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:31:18.466487 kubelet[3670]: E0620 19:31:18.466459 3670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:31:18.469062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:31:18.469189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:31:18.469506 systemd[1]: kubelet.service: Consumed 138ms CPU time, 116.4M memory peak. Jun 20 19:31:20.089004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:20.089138 systemd[1]: kubelet.service: Consumed 138ms CPU time, 116.4M memory peak. Jun 20 19:31:20.091129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:20.111515 systemd[1]: Reload requested from client PID 3698 ('systemctl') (unit session-9.scope)... Jun 20 19:31:20.111525 systemd[1]: Reloading... Jun 20 19:31:20.168860 zram_generator::config[3745]: No configuration found. Jun 20 19:31:20.247284 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:31:20.360310 systemd[1]: Reloading finished in 248 ms. 
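The containerd entries above record each image pull together with its size and duration (for example, kube-apiserver in 1.03293498s, kube-controller-manager in 742.412201ms, and etcd in 3.097620061s). Purely as an illustrative aside, not part of the journal itself, a minimal Python sketch like the one below could tabulate those timings from journalctl text in the same format; the script name and the use of stdin are assumptions made for the example, not anything the log prescribes.

# pull_times.py -- illustrative sketch, not part of the log above. Assumes
# journalctl output in the same format as the containerd entries shown here
# and summarizes the 'Pulled image ... in <duration>' lines (durations such
# as "1.03293498s" or "742.412201ms").
import re
import sys

PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*? in (?P<value>[\d.]+)(?P<unit>ms|s)"'
)

def to_seconds(value: str, unit: str) -> float:
    # containerd prints sub-second pulls in milliseconds, longer ones in seconds
    return float(value) / 1000.0 if unit == "ms" else float(value)

total = 0.0
for line in sys.stdin:
    m = PULLED.search(line)
    if not m:
        continue
    secs = to_seconds(m.group("value"), m.group("unit"))
    total += secs
    print(f"{m.group('image'):55s} {secs:8.3f}s")
print(f"{'total':55s} {total:8.3f}s")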
Jun 20 19:31:20.410754 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:31:20.410830 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:31:20.411114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:20.412841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:20.545720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:20.549175 (kubelet)[3805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:31:20.578854 kubelet[3805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:31:20.578998 kubelet[3805]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:31:20.578998 kubelet[3805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:31:20.578998 kubelet[3805]: I0620 19:31:20.578929 3805 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:31:21.751013 kubelet[3805]: I0620 19:31:21.750983 3805 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:31:21.751013 kubelet[3805]: I0620 19:31:21.751009 3805 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:31:21.751279 kubelet[3805]: I0620 19:31:21.751200 3805 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:31:21.781419 kubelet[3805]: E0620 19:31:21.781394 3805 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://147.28.145.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.145.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 19:31:21.784091 kubelet[3805]: I0620 19:31:21.784071 3805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:31:21.789376 kubelet[3805]: I0620 19:31:21.789355 3805 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:31:21.809799 kubelet[3805]: I0620 19:31:21.809782 3805 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:31:21.810784 kubelet[3805]: I0620 19:31:21.810744 3805 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:31:21.810936 kubelet[3805]: I0620 19:31:21.810785 3805 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-403d322406","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:31:21.811015 kubelet[3805]: I0620 19:31:21.811004 3805 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:31:21.811015 kubelet[3805]: I0620 19:31:21.811013 3805 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:31:21.811258 kubelet[3805]: I0620 19:31:21.811244 3805 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:31:21.814196 kubelet[3805]: I0620 19:31:21.814172 3805 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:31:21.814226 kubelet[3805]: I0620 19:31:21.814199 3805 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:31:21.815824 kubelet[3805]: I0620 19:31:21.815808 3805 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:31:21.816961 kubelet[3805]: I0620 19:31:21.816946 3805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:31:21.817527 kubelet[3805]: E0620 19:31:21.817497 3805 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://147.28.145.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.145.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 19:31:21.817748 kubelet[3805]: E0620 19:31:21.817709 3805 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://147.28.145.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-403d322406&limit=500&resourceVersion=0\": dial tcp 147.28.145.50:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 19:31:21.817983 kubelet[3805]: I0620 19:31:21.817968 3805 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:31:21.818625 kubelet[3805]: I0620 19:31:21.818612 3805 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:31:21.818736 kubelet[3805]: W0620 19:31:21.818729 3805 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:31:21.820872 kubelet[3805]: I0620 19:31:21.820861 3805 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:31:21.820906 kubelet[3805]: I0620 19:31:21.820899 3805 server.go:1289] "Started kubelet" Jun 20 19:31:21.821067 kubelet[3805]: I0620 19:31:21.821013 3805 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:31:21.821091 kubelet[3805]: I0620 19:31:21.821037 3805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:31:21.821363 kubelet[3805]: I0620 19:31:21.821350 3805 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:31:21.824599 kubelet[3805]: I0620 19:31:21.824581 3805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:31:21.824624 kubelet[3805]: I0620 19:31:21.824588 3805 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:31:21.825522 kubelet[3805]: I0620 19:31:21.825511 3805 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:31:21.825557 kubelet[3805]: E0620 19:31:21.825542 3805 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-403d322406\" not found" Jun 20 19:31:21.825639 kubelet[3805]: I0620 19:31:21.825622 3805 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:31:21.825717 kubelet[3805]: I0620 19:31:21.825701 3805 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:31:21.825763 kubelet[3805]: I0620 19:31:21.825749 3805 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:31:21.825958 kubelet[3805]: I0620 19:31:21.825940 3805 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:31:21.825982 kubelet[3805]: E0620 19:31:21.825948 3805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.145.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-403d322406?timeout=10s\": dial tcp 147.28.145.50:6443: connect: connection refused" interval="200ms" Jun 20 19:31:21.826054 kubelet[3805]: E0620 19:31:21.826035 3805 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://147.28.145.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.145.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 19:31:21.826080 kubelet[3805]: I0620 19:31:21.826042 3805 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 
19:31:21.826269 kubelet[3805]: E0620 19:31:21.826254 3805 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:31:21.826764 kubelet[3805]: I0620 19:31:21.826750 3805 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:31:21.827403 kubelet[3805]: E0620 19:31:21.826412 3805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.145.50:6443/api/v1/namespaces/default/events\": dial tcp 147.28.145.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-403d322406.184ad71301c49fd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-403d322406,UID:ci-4344.1.0-a-403d322406,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-403d322406,},FirstTimestamp:2025-06-20 19:31:21.820872656 +0000 UTC m=+1.268653864,LastTimestamp:2025-06-20 19:31:21.820872656 +0000 UTC m=+1.268653864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-403d322406,}" Jun 20 19:31:21.838560 kubelet[3805]: I0620 19:31:21.838546 3805 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:31:21.838593 kubelet[3805]: I0620 19:31:21.838560 3805 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:31:21.838593 kubelet[3805]: I0620 19:31:21.838578 3805 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:31:21.839239 kubelet[3805]: I0620 19:31:21.839223 3805 policy_none.go:49] "None policy: Start" Jun 20 19:31:21.839268 kubelet[3805]: I0620 19:31:21.839243 3805 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:31:21.839268 kubelet[3805]: I0620 19:31:21.839254 3805 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:31:21.841068 kubelet[3805]: I0620 19:31:21.841043 3805 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:31:21.842031 kubelet[3805]: I0620 19:31:21.842019 3805 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 19:31:21.842056 kubelet[3805]: I0620 19:31:21.842036 3805 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:31:21.842056 kubelet[3805]: I0620 19:31:21.842053 3805 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:31:21.842099 kubelet[3805]: I0620 19:31:21.842059 3805 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:31:21.842119 kubelet[3805]: E0620 19:31:21.842096 3805 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:31:21.842866 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jun 20 19:31:21.843511 kubelet[3805]: E0620 19:31:21.842937 3805 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://147.28.145.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.145.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 19:31:21.869310 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:31:21.871961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:31:21.885530 kubelet[3805]: E0620 19:31:21.885483 3805 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:31:21.885967 kubelet[3805]: I0620 19:31:21.885683 3805 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:31:21.885967 kubelet[3805]: I0620 19:31:21.885695 3805 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:31:21.885967 kubelet[3805]: I0620 19:31:21.885880 3805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:31:21.886425 kubelet[3805]: E0620 19:31:21.886401 3805 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:31:21.886472 kubelet[3805]: E0620 19:31:21.886454 3805 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-403d322406\" not found" Jun 20 19:31:21.949179 systemd[1]: Created slice kubepods-burstable-pod7cbedd4e6108cedc7d446183b893a6c2.slice - libcontainer container kubepods-burstable-pod7cbedd4e6108cedc7d446183b893a6c2.slice. Jun 20 19:31:21.973463 kubelet[3805]: E0620 19:31:21.973444 3805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-403d322406\" not found" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:21.975107 systemd[1]: Created slice kubepods-burstable-pod6331c98804a05f6d465ffa8a8e34d02e.slice - libcontainer container kubepods-burstable-pod6331c98804a05f6d465ffa8a8e34d02e.slice. Jun 20 19:31:21.987775 kubelet[3805]: I0620 19:31:21.987759 3805 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:21.988160 kubelet[3805]: E0620 19:31:21.988132 3805 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.145.50:6443/api/v1/nodes\": dial tcp 147.28.145.50:6443: connect: connection refused" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:21.996749 kubelet[3805]: E0620 19:31:21.996737 3805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-403d322406\" not found" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:21.999048 systemd[1]: Created slice kubepods-burstable-poda31cf569247e5061dc8e8f9da4c2a40d.slice - libcontainer container kubepods-burstable-poda31cf569247e5061dc8e8f9da4c2a40d.slice. 
Jun 20 19:31:22.000304 kubelet[3805]: E0620 19:31:22.000289 3805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-403d322406\" not found" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:22.026724 kubelet[3805]: E0620 19:31:22.026651 3805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.145.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-403d322406?timeout=10s\": dial tcp 147.28.145.50:6443: connect: connection refused" interval="400ms" Jun 20 19:31:22.127299 kubelet[3805]: I0620 19:31:22.127271 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127362 kubelet[3805]: I0620 19:31:22.127300 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127362 kubelet[3805]: I0620 19:31:22.127318 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cbedd4e6108cedc7d446183b893a6c2-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-403d322406\" (UID: \"7cbedd4e6108cedc7d446183b893a6c2\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127362 kubelet[3805]: I0620 19:31:22.127333 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127477 kubelet[3805]: I0620 19:31:22.127389 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127477 kubelet[3805]: I0620 19:31:22.127416 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127477 kubelet[3805]: I0620 19:31:22.127435 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a31cf569247e5061dc8e8f9da4c2a40d-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-403d322406\" (UID: \"a31cf569247e5061dc8e8f9da4c2a40d\") " 
pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127477 kubelet[3805]: I0620 19:31:22.127451 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cbedd4e6108cedc7d446183b893a6c2-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-403d322406\" (UID: \"7cbedd4e6108cedc7d446183b893a6c2\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.127477 kubelet[3805]: I0620 19:31:22.127467 3805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cbedd4e6108cedc7d446183b893a6c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-403d322406\" (UID: \"7cbedd4e6108cedc7d446183b893a6c2\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:22.189485 kubelet[3805]: I0620 19:31:22.189460 3805 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:22.189746 kubelet[3805]: E0620 19:31:22.189722 3805 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.145.50:6443/api/v1/nodes\": dial tcp 147.28.145.50:6443: connect: connection refused" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:22.274628 containerd[2800]: time="2025-06-20T19:31:22.274584333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-403d322406,Uid:7cbedd4e6108cedc7d446183b893a6c2,Namespace:kube-system,Attempt:0,}" Jun 20 19:31:22.284870 containerd[2800]: time="2025-06-20T19:31:22.284804364Z" level=info msg="connecting to shim f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac" address="unix:///run/containerd/s/58c926c3c7493786d5374564a7ad7986f410876e0b90d517414f41a12ff4ba9f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:22.297872 containerd[2800]: time="2025-06-20T19:31:22.297814190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-403d322406,Uid:6331c98804a05f6d465ffa8a8e34d02e,Namespace:kube-system,Attempt:0,}" Jun 20 19:31:22.300958 containerd[2800]: time="2025-06-20T19:31:22.300902741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-403d322406,Uid:a31cf569247e5061dc8e8f9da4c2a40d,Namespace:kube-system,Attempt:0,}" Jun 20 19:31:22.320120 containerd[2800]: time="2025-06-20T19:31:22.320085891Z" level=info msg="connecting to shim 16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0" address="unix:///run/containerd/s/bb6ba60d782d42a149912510bad166dc556f3858bdd0e4d5934a0139d064c5c9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:22.320252 containerd[2800]: time="2025-06-20T19:31:22.320227236Z" level=info msg="connecting to shim 31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823" address="unix:///run/containerd/s/e54c7b87c846fa90af2832e60d9e732001ea71338dde8303163153f5ba10b6f5" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:22.322976 systemd[1]: Started cri-containerd-f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac.scope - libcontainer container f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac. Jun 20 19:31:22.334540 systemd[1]: Started cri-containerd-16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0.scope - libcontainer container 16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0. 
Jun 20 19:31:22.335929 systemd[1]: Started cri-containerd-31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823.scope - libcontainer container 31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823. Jun 20 19:31:22.351520 containerd[2800]: time="2025-06-20T19:31:22.351486834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-403d322406,Uid:7cbedd4e6108cedc7d446183b893a6c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac\"" Jun 20 19:31:22.354323 containerd[2800]: time="2025-06-20T19:31:22.354299058Z" level=info msg="CreateContainer within sandbox \"f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:31:22.358586 containerd[2800]: time="2025-06-20T19:31:22.358553209Z" level=info msg="Container afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:22.359711 containerd[2800]: time="2025-06-20T19:31:22.359685361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-403d322406,Uid:a31cf569247e5061dc8e8f9da4c2a40d,Namespace:kube-system,Attempt:0,} returns sandbox id \"16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0\"" Jun 20 19:31:22.360327 containerd[2800]: time="2025-06-20T19:31:22.360300971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-403d322406,Uid:6331c98804a05f6d465ffa8a8e34d02e,Namespace:kube-system,Attempt:0,} returns sandbox id \"31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823\"" Jun 20 19:31:22.361676 containerd[2800]: time="2025-06-20T19:31:22.361654129Z" level=info msg="CreateContainer within sandbox \"16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:31:22.362129 containerd[2800]: time="2025-06-20T19:31:22.362108708Z" level=info msg="CreateContainer within sandbox \"f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814\"" Jun 20 19:31:22.362441 containerd[2800]: time="2025-06-20T19:31:22.362190954Z" level=info msg="CreateContainer within sandbox \"31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:31:22.362575 containerd[2800]: time="2025-06-20T19:31:22.362547259Z" level=info msg="StartContainer for \"afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814\"" Jun 20 19:31:22.363572 containerd[2800]: time="2025-06-20T19:31:22.363549202Z" level=info msg="connecting to shim afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814" address="unix:///run/containerd/s/58c926c3c7493786d5374564a7ad7986f410876e0b90d517414f41a12ff4ba9f" protocol=ttrpc version=3 Jun 20 19:31:22.365774 containerd[2800]: time="2025-06-20T19:31:22.365748760Z" level=info msg="Container 3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:22.366111 containerd[2800]: time="2025-06-20T19:31:22.366085975Z" level=info msg="Container 51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:22.368975 containerd[2800]: 
time="2025-06-20T19:31:22.368950419Z" level=info msg="CreateContainer within sandbox \"31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068\"" Jun 20 19:31:22.369196 containerd[2800]: time="2025-06-20T19:31:22.369055254Z" level=info msg="CreateContainer within sandbox \"16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38\"" Jun 20 19:31:22.369278 containerd[2800]: time="2025-06-20T19:31:22.369259079Z" level=info msg="StartContainer for \"3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38\"" Jun 20 19:31:22.369316 containerd[2800]: time="2025-06-20T19:31:22.369297895Z" level=info msg="StartContainer for \"51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068\"" Jun 20 19:31:22.370217 containerd[2800]: time="2025-06-20T19:31:22.370194287Z" level=info msg="connecting to shim 3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38" address="unix:///run/containerd/s/bb6ba60d782d42a149912510bad166dc556f3858bdd0e4d5934a0139d064c5c9" protocol=ttrpc version=3 Jun 20 19:31:22.370265 containerd[2800]: time="2025-06-20T19:31:22.370243960Z" level=info msg="connecting to shim 51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068" address="unix:///run/containerd/s/e54c7b87c846fa90af2832e60d9e732001ea71338dde8303163153f5ba10b6f5" protocol=ttrpc version=3 Jun 20 19:31:22.373694 systemd[1]: Started cri-containerd-afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814.scope - libcontainer container afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814. Jun 20 19:31:22.380737 systemd[1]: Started cri-containerd-3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38.scope - libcontainer container 3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38. Jun 20 19:31:22.381869 systemd[1]: Started cri-containerd-51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068.scope - libcontainer container 51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068. 
Jun 20 19:31:22.403679 containerd[2800]: time="2025-06-20T19:31:22.403648470Z" level=info msg="StartContainer for \"afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814\" returns successfully" Jun 20 19:31:22.407797 containerd[2800]: time="2025-06-20T19:31:22.407773963Z" level=info msg="StartContainer for \"3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38\" returns successfully" Jun 20 19:31:22.409220 containerd[2800]: time="2025-06-20T19:31:22.409195605Z" level=info msg="StartContainer for \"51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068\" returns successfully" Jun 20 19:31:22.427932 kubelet[3805]: E0620 19:31:22.427879 3805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.145.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-403d322406?timeout=10s\": dial tcp 147.28.145.50:6443: connect: connection refused" interval="800ms" Jun 20 19:31:22.591952 kubelet[3805]: I0620 19:31:22.591823 3805 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:22.846910 kubelet[3805]: E0620 19:31:22.846704 3805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-403d322406\" not found" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:22.847624 kubelet[3805]: E0620 19:31:22.847604 3805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-403d322406\" not found" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:22.851010 kubelet[3805]: E0620 19:31:22.850989 3805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-403d322406\" not found" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:23.513302 kubelet[3805]: E0620 19:31:23.513170 3805 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-403d322406\" not found" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:23.614480 kubelet[3805]: I0620 19:31:23.614450 3805 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:23.614480 kubelet[3805]: E0620 19:31:23.614482 3805 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.0-a-403d322406\": node \"ci-4344.1.0-a-403d322406\" not found" Jun 20 19:31:23.625968 kubelet[3805]: I0620 19:31:23.625947 3805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.629327 kubelet[3805]: E0620 19:31:23.629308 3805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-403d322406\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.629327 kubelet[3805]: I0620 19:31:23.629326 3805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.630628 kubelet[3805]: E0620 19:31:23.630606 3805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-403d322406\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.630628 kubelet[3805]: I0620 19:31:23.630623 3805 kubelet.go:3309] "Creating a mirror pod 
for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.631799 kubelet[3805]: E0620 19:31:23.631774 3805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-403d322406\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.818034 kubelet[3805]: I0620 19:31:23.817945 3805 apiserver.go:52] "Watching apiserver" Jun 20 19:31:23.825909 kubelet[3805]: I0620 19:31:23.825894 3805 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:31:23.849277 kubelet[3805]: I0620 19:31:23.849264 3805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.849494 kubelet[3805]: I0620 19:31:23.849370 3805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.850575 kubelet[3805]: E0620 19:31:23.850550 3805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-403d322406\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:23.850657 kubelet[3805]: E0620 19:31:23.850640 3805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-403d322406\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:24.850227 kubelet[3805]: I0620 19:31:24.850195 3805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:24.865041 kubelet[3805]: I0620 19:31:24.865017 3805 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:31:25.483483 systemd[1]: Reload requested from client PID 4210 ('systemctl') (unit session-9.scope)... Jun 20 19:31:25.483493 systemd[1]: Reloading... Jun 20 19:31:25.551863 zram_generator::config[4258]: No configuration found. Jun 20 19:31:25.631676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:31:25.758832 systemd[1]: Reloading finished in 275 ms. Jun 20 19:31:25.789367 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:25.820649 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:31:25.822892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:25.822942 systemd[1]: kubelet.service: Consumed 1.679s CPU time, 143.8M memory peak. Jun 20 19:31:25.824764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:25.949864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:25.953339 (kubelet)[4320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:31:25.982825 kubelet[4320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:31:25.982825 kubelet[4320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:31:25.982825 kubelet[4320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:31:25.983047 kubelet[4320]: I0620 19:31:25.982880 4320 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:31:25.988674 kubelet[4320]: I0620 19:31:25.988651 4320 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:31:25.988700 kubelet[4320]: I0620 19:31:25.988676 4320 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:31:25.988900 kubelet[4320]: I0620 19:31:25.988890 4320 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:31:25.990099 kubelet[4320]: I0620 19:31:25.990087 4320 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 19:31:25.992159 kubelet[4320]: I0620 19:31:25.992142 4320 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:31:25.994885 kubelet[4320]: I0620 19:31:25.994862 4320 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:31:26.013721 kubelet[4320]: I0620 19:31:26.013666 4320 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:31:26.013875 kubelet[4320]: I0620 19:31:26.013855 4320 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:31:26.014020 kubelet[4320]: I0620 19:31:26.013876 4320 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4344.1.0-a-403d322406","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:31:26.014095 kubelet[4320]: I0620 19:31:26.014030 4320 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:31:26.014095 kubelet[4320]: I0620 19:31:26.014039 4320 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:31:26.014140 kubelet[4320]: I0620 19:31:26.014098 4320 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:31:26.014398 kubelet[4320]: I0620 19:31:26.014389 4320 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:31:26.014419 kubelet[4320]: I0620 19:31:26.014406 4320 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:31:26.014437 kubelet[4320]: I0620 19:31:26.014430 4320 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:31:26.014462 kubelet[4320]: I0620 19:31:26.014443 4320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:31:26.015201 kubelet[4320]: I0620 19:31:26.015188 4320 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:31:26.015747 kubelet[4320]: I0620 19:31:26.015735 4320 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:31:26.017415 kubelet[4320]: I0620 19:31:26.017405 4320 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:31:26.017451 kubelet[4320]: I0620 19:31:26.017445 4320 server.go:1289] "Started kubelet" Jun 20 19:31:26.020108 kubelet[4320]: I0620 19:31:26.019580 4320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:31:26.020146 kubelet[4320]: I0620 19:31:26.017489 4320 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:31:26.020844 kubelet[4320]: I0620 19:31:26.020823 4320 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:31:26.021735 kubelet[4320]: 
I0620 19:31:26.021721 4320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:31:26.021759 kubelet[4320]: I0620 19:31:26.021734 4320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:31:26.021783 kubelet[4320]: I0620 19:31:26.021764 4320 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:31:26.021783 kubelet[4320]: E0620 19:31:26.021772 4320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-403d322406\" not found" Jun 20 19:31:26.021834 kubelet[4320]: I0620 19:31:26.021820 4320 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:31:26.021934 kubelet[4320]: E0620 19:31:26.021919 4320 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:31:26.021988 kubelet[4320]: I0620 19:31:26.021947 4320 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:31:26.022181 kubelet[4320]: I0620 19:31:26.022167 4320 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:31:26.022279 kubelet[4320]: I0620 19:31:26.022262 4320 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:31:26.022736 kubelet[4320]: I0620 19:31:26.022721 4320 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:31:26.023022 kubelet[4320]: I0620 19:31:26.023006 4320 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:31:26.026850 kubelet[4320]: I0620 19:31:26.026818 4320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 19:31:26.032608 kubelet[4320]: I0620 19:31:26.032586 4320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:31:26.032608 kubelet[4320]: I0620 19:31:26.032611 4320 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:31:26.032691 kubelet[4320]: I0620 19:31:26.032626 4320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
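The container_manager_linux.go entries above (for both kubelet[3805] and kubelet[4320]) print the kubelet's full NodeConfig, including the hard eviction thresholds, as a JSON object following "nodeConfig=". As an illustrative sketch only, and assuming that object is valid JSON exactly as it appears in the journal line, the snippet below extracts it and lists the recorded thresholds; the file name and stdin usage are assumptions made for the example.

# eviction_thresholds.py -- illustrative sketch, not part of the log above.
# Assumes journal lines containing "nodeConfig={...}" as printed by
# container_manager_linux.go in the entries above, and lists the hard
# eviction thresholds embedded in that JSON object.
import json
import sys

def thresholds(line: str) -> None:
    idx = line.find("nodeConfig=")
    if idx < 0:
        return
    try:
        # raw_decode parses the first JSON value and ignores trailing log text
        cfg, _ = json.JSONDecoder().raw_decode(line[idx + len("nodeConfig="):])
    except json.JSONDecodeError:
        return
    for t in cfg.get("HardEvictionThresholds") or []:
        value = t["Value"]
        # thresholds are given either as a quantity (e.g. "100Mi") or a percentage
        amount = value["Quantity"] if value["Quantity"] is not None else f"{value['Percentage']:.0%}"
        print(f"{t['Signal']} {t['Operator']} {amount}")

for line in sys.stdin:
    thresholds(line)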
Jun 20 19:31:26.032691 kubelet[4320]: I0620 19:31:26.032634 4320 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:31:26.032691 kubelet[4320]: E0620 19:31:26.032674 4320 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:31:26.052410 kubelet[4320]: I0620 19:31:26.052390 4320 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:31:26.052435 kubelet[4320]: I0620 19:31:26.052410 4320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:31:26.052435 kubelet[4320]: I0620 19:31:26.052429 4320 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:31:26.052573 kubelet[4320]: I0620 19:31:26.052558 4320 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:31:26.052604 kubelet[4320]: I0620 19:31:26.052570 4320 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:31:26.052604 kubelet[4320]: I0620 19:31:26.052588 4320 policy_none.go:49] "None policy: Start" Jun 20 19:31:26.052604 kubelet[4320]: I0620 19:31:26.052596 4320 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:31:26.052604 kubelet[4320]: I0620 19:31:26.052604 4320 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:31:26.052690 kubelet[4320]: I0620 19:31:26.052683 4320 state_mem.go:75] "Updated machine memory state" Jun 20 19:31:26.055768 kubelet[4320]: E0620 19:31:26.055752 4320 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:31:26.055941 kubelet[4320]: I0620 19:31:26.055930 4320 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:31:26.055969 kubelet[4320]: I0620 19:31:26.055943 4320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:31:26.056105 kubelet[4320]: I0620 19:31:26.056091 4320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:31:26.056550 kubelet[4320]: E0620 19:31:26.056533 4320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:31:26.133836 kubelet[4320]: I0620 19:31:26.133810 4320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.133916 kubelet[4320]: I0620 19:31:26.133874 4320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.133916 kubelet[4320]: I0620 19:31:26.133909 4320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.136603 kubelet[4320]: I0620 19:31:26.136587 4320 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:31:26.136685 kubelet[4320]: I0620 19:31:26.136668 4320 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:31:26.137044 kubelet[4320]: I0620 19:31:26.137026 4320 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:31:26.137079 kubelet[4320]: E0620 19:31:26.137065 4320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-403d322406\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.158018 kubelet[4320]: I0620 19:31:26.158000 4320 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:26.161717 kubelet[4320]: I0620 19:31:26.161699 4320 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:26.161779 kubelet[4320]: I0620 19:31:26.161768 4320 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-403d322406" Jun 20 19:31:26.322893 kubelet[4320]: I0620 19:31:26.322821 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.322955 kubelet[4320]: I0620 19:31:26.322904 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.322955 kubelet[4320]: I0620 19:31:26.322930 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.323032 kubelet[4320]: I0620 19:31:26.323016 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.323075 kubelet[4320]: I0620 19:31:26.323043 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6331c98804a05f6d465ffa8a8e34d02e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-403d322406\" (UID: \"6331c98804a05f6d465ffa8a8e34d02e\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.323075 kubelet[4320]: I0620 19:31:26.323061 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cbedd4e6108cedc7d446183b893a6c2-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-403d322406\" (UID: \"7cbedd4e6108cedc7d446183b893a6c2\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.323160 kubelet[4320]: I0620 19:31:26.323079 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cbedd4e6108cedc7d446183b893a6c2-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-403d322406\" (UID: \"7cbedd4e6108cedc7d446183b893a6c2\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.323160 kubelet[4320]: I0620 19:31:26.323100 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cbedd4e6108cedc7d446183b893a6c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-403d322406\" (UID: \"7cbedd4e6108cedc7d446183b893a6c2\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" Jun 20 19:31:26.323160 kubelet[4320]: I0620 19:31:26.323117 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a31cf569247e5061dc8e8f9da4c2a40d-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-403d322406\" (UID: \"a31cf569247e5061dc8e8f9da4c2a40d\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:27.015542 kubelet[4320]: I0620 19:31:27.015519 4320 apiserver.go:52] "Watching apiserver" Jun 20 19:31:27.022702 kubelet[4320]: I0620 19:31:27.022675 4320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:31:27.038825 kubelet[4320]: I0620 19:31:27.038809 4320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:27.041627 kubelet[4320]: I0620 19:31:27.041614 4320 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:31:27.041678 kubelet[4320]: E0620 19:31:27.041666 4320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-403d322406\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" Jun 20 19:31:27.052827 kubelet[4320]: I0620 19:31:27.052788 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-403d322406" podStartSLOduration=1.052776028 
podStartE2EDuration="1.052776028s" podCreationTimestamp="2025-06-20 19:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:31:27.052725257 +0000 UTC m=+1.096220531" watchObservedRunningTime="2025-06-20 19:31:27.052776028 +0000 UTC m=+1.096271302" Jun 20 19:31:27.058557 kubelet[4320]: I0620 19:31:27.058505 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-403d322406" podStartSLOduration=1.058487145 podStartE2EDuration="1.058487145s" podCreationTimestamp="2025-06-20 19:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:31:27.058410451 +0000 UTC m=+1.101905765" watchObservedRunningTime="2025-06-20 19:31:27.058487145 +0000 UTC m=+1.101982459" Jun 20 19:31:27.069345 kubelet[4320]: I0620 19:31:27.069310 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-403d322406" podStartSLOduration=3.06929858 podStartE2EDuration="3.06929858s" podCreationTimestamp="2025-06-20 19:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:31:27.063641304 +0000 UTC m=+1.107136618" watchObservedRunningTime="2025-06-20 19:31:27.06929858 +0000 UTC m=+1.112793894" Jun 20 19:31:30.747504 kubelet[4320]: I0620 19:31:30.747436 4320 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:31:30.748016 kubelet[4320]: I0620 19:31:30.747902 4320 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:31:30.748051 containerd[2800]: time="2025-06-20T19:31:30.747748916Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:31:31.928997 systemd[1]: Created slice kubepods-besteffort-podc972b193_2b00_4b85_b000_265d8d2e12f9.slice - libcontainer container kubepods-besteffort-podc972b193_2b00_4b85_b000_265d8d2e12f9.slice. 
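[Annotation] For the kube-scheduler entry above, no image pull happened (firstStartedPulling and lastFinishedPulling are the zero time), and the two reported durations are identical. A minimal check, using only the timestamps copied from that entry, shows they are simply observedRunningTime minus podCreationTimestamp:

```go
// Cross-check of the kube-scheduler pod_startup_latency_tracker entry:
// with no image pull, podStartE2EDuration == observedRunningTime - podCreationTimestamp.
// Timestamps below are copied from the log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2025, time.June, 20, 19, 31, 26, 0, time.UTC)
	running := time.Date(2025, time.June, 20, 19, 31, 27, 52776028, time.UTC)
	fmt.Println(running.Sub(created)) // 1.052776028s, matching the logged durations
}
```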
Jun 20 19:31:31.959877 kubelet[4320]: I0620 19:31:31.959838 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c972b193-2b00-4b85-b000-265d8d2e12f9-kube-proxy\") pod \"kube-proxy-vvv2n\" (UID: \"c972b193-2b00-4b85-b000-265d8d2e12f9\") " pod="kube-system/kube-proxy-vvv2n" Jun 20 19:31:31.960188 kubelet[4320]: I0620 19:31:31.959911 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c972b193-2b00-4b85-b000-265d8d2e12f9-lib-modules\") pod \"kube-proxy-vvv2n\" (UID: \"c972b193-2b00-4b85-b000-265d8d2e12f9\") " pod="kube-system/kube-proxy-vvv2n" Jun 20 19:31:31.960188 kubelet[4320]: I0620 19:31:31.959950 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c972b193-2b00-4b85-b000-265d8d2e12f9-xtables-lock\") pod \"kube-proxy-vvv2n\" (UID: \"c972b193-2b00-4b85-b000-265d8d2e12f9\") " pod="kube-system/kube-proxy-vvv2n" Jun 20 19:31:31.960188 kubelet[4320]: I0620 19:31:31.959969 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfv8\" (UniqueName: \"kubernetes.io/projected/c972b193-2b00-4b85-b000-265d8d2e12f9-kube-api-access-hjfv8\") pod \"kube-proxy-vvv2n\" (UID: \"c972b193-2b00-4b85-b000-265d8d2e12f9\") " pod="kube-system/kube-proxy-vvv2n" Jun 20 19:31:31.984565 systemd[1]: Created slice kubepods-besteffort-pod82c4546e_39fe_4c60_9040_dea6525d0718.slice - libcontainer container kubepods-besteffort-pod82c4546e_39fe_4c60_9040_dea6525d0718.slice. Jun 20 19:31:32.060158 kubelet[4320]: I0620 19:31:32.060131 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/82c4546e-39fe-4c60-9040-dea6525d0718-var-lib-calico\") pod \"tigera-operator-68f7c7984d-g6dn6\" (UID: \"82c4546e-39fe-4c60-9040-dea6525d0718\") " pod="tigera-operator/tigera-operator-68f7c7984d-g6dn6" Jun 20 19:31:32.060241 kubelet[4320]: I0620 19:31:32.060166 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqc9\" (UniqueName: \"kubernetes.io/projected/82c4546e-39fe-4c60-9040-dea6525d0718-kube-api-access-2gqc9\") pod \"tigera-operator-68f7c7984d-g6dn6\" (UID: \"82c4546e-39fe-4c60-9040-dea6525d0718\") " pod="tigera-operator/tigera-operator-68f7c7984d-g6dn6" Jun 20 19:31:32.250805 containerd[2800]: time="2025-06-20T19:31:32.250764867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvv2n,Uid:c972b193-2b00-4b85-b000-265d8d2e12f9,Namespace:kube-system,Attempt:0,}" Jun 20 19:31:32.258882 containerd[2800]: time="2025-06-20T19:31:32.258857004Z" level=info msg="connecting to shim e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f" address="unix:///run/containerd/s/df92375ace8aa1a6aa94e7b8f9e717fd2b4f2133953f66c43ada533fc5ed23d4" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:32.282976 systemd[1]: Started cri-containerd-e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f.scope - libcontainer container e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f. 
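[Annotation] The "Created slice kubepods-besteffort-pod….slice" unit just above is the systemd cgroup for the BestEffort kube-proxy pod whose UID appears in the volume entries that follow. The sketch below only illustrates the naming pattern the log is consistent with (nodeConfig has CgroupDriver "systemd": dashes in the pod UID become underscores, since systemd uses "-" as a slice path separator); it is an inference from this one example, not kubelet's actual implementation.

```go
// Sketch of the slice-name pattern observed in the log for a BestEffort pod.
package main

import (
	"fmt"
	"strings"
)

func besteffortPodSlice(podUID string) string {
	// systemd slice names use "-" as a hierarchy separator, so the UID's
	// dashes are replaced with underscores.
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortPodSlice("c972b193-2b00-4b85-b000-265d8d2e12f9"))
	// kubepods-besteffort-podc972b193_2b00_4b85_b000_265d8d2e12f9.slice
}
```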
Jun 20 19:31:32.287019 containerd[2800]: time="2025-06-20T19:31:32.286994383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-g6dn6,Uid:82c4546e-39fe-4c60-9040-dea6525d0718,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:31:32.295613 containerd[2800]: time="2025-06-20T19:31:32.295585367Z" level=info msg="connecting to shim f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665" address="unix:///run/containerd/s/9da39e466c14f65f15bb2c79500bf5c9d88f7fa212d19ae3b316625ec1ce62b8" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:32.300801 containerd[2800]: time="2025-06-20T19:31:32.300777564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvv2n,Uid:c972b193-2b00-4b85-b000-265d8d2e12f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f\"" Jun 20 19:31:32.303140 containerd[2800]: time="2025-06-20T19:31:32.303117510Z" level=info msg="CreateContainer within sandbox \"e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:31:32.307683 systemd[1]: Started cri-containerd-f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665.scope - libcontainer container f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665. Jun 20 19:31:32.309083 containerd[2800]: time="2025-06-20T19:31:32.309055261Z" level=info msg="Container 0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:32.312891 containerd[2800]: time="2025-06-20T19:31:32.312859754Z" level=info msg="CreateContainer within sandbox \"e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781\"" Jun 20 19:31:32.313263 containerd[2800]: time="2025-06-20T19:31:32.313244812Z" level=info msg="StartContainer for \"0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781\"" Jun 20 19:31:32.314614 containerd[2800]: time="2025-06-20T19:31:32.314590180Z" level=info msg="connecting to shim 0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781" address="unix:///run/containerd/s/df92375ace8aa1a6aa94e7b8f9e717fd2b4f2133953f66c43ada533fc5ed23d4" protocol=ttrpc version=3 Jun 20 19:31:32.324716 systemd[1]: Started cri-containerd-0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781.scope - libcontainer container 0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781. 
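[Annotation] The RunPodSandbox / CreateContainer / StartContainer lines for kube-proxy-vvv2n above are the kubelet driving containerd through CRI. The sketch below shows roughly what that sequence looks like as direct CRI calls; the sandbox metadata is copied from the log, while the containerd socket path and the kube-proxy image reference are assumptions added for illustration.

```go
// Hedged sketch of the CRI sequence behind the "RunPodSandbox ... returns
// sandbox id", "CreateContainer within sandbox" and "StartContainer" entries.
// Assumed: the CRI socket path and the image reference. From the log: the
// PodSandboxMetadata values and the container name.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket path
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{ // values from the log entry
			Name:      "kube-proxy-vvv2n",
			Uid:       "c972b193-2b00-4b85-b000-265d8d2e12f9",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.1"}, // assumed tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```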
Jun 20 19:31:32.332157 containerd[2800]: time="2025-06-20T19:31:32.332123656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-g6dn6,Uid:82c4546e-39fe-4c60-9040-dea6525d0718,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665\"" Jun 20 19:31:32.333153 containerd[2800]: time="2025-06-20T19:31:32.333128059Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:31:32.352105 containerd[2800]: time="2025-06-20T19:31:32.352078196Z" level=info msg="StartContainer for \"0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781\" returns successfully" Jun 20 19:31:33.054654 kubelet[4320]: I0620 19:31:33.054567 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vvv2n" podStartSLOduration=2.054551202 podStartE2EDuration="2.054551202s" podCreationTimestamp="2025-06-20 19:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:31:33.054455928 +0000 UTC m=+7.097951242" watchObservedRunningTime="2025-06-20 19:31:33.054551202 +0000 UTC m=+7.098046516" Jun 20 19:31:33.361161 containerd[2800]: time="2025-06-20T19:31:33.360956753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:33.361161 containerd[2800]: time="2025-06-20T19:31:33.361013638Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=22149772" Jun 20 19:31:33.361587 containerd[2800]: time="2025-06-20T19:31:33.361568625Z" level=info msg="ImageCreate event name:\"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:33.363302 containerd[2800]: time="2025-06-20T19:31:33.363278885Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:33.363997 containerd[2800]: time="2025-06-20T19:31:33.363974485Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"22145767\" in 1.030818108s" Jun 20 19:31:33.364031 containerd[2800]: time="2025-06-20T19:31:33.364002888Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\"" Jun 20 19:31:33.365913 containerd[2800]: time="2025-06-20T19:31:33.365891912Z" level=info msg="CreateContainer within sandbox \"f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:31:33.369531 containerd[2800]: time="2025-06-20T19:31:33.369507734Z" level=info msg="Container a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:33.372015 containerd[2800]: time="2025-06-20T19:31:33.371995007Z" level=info msg="CreateContainer within sandbox \"f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30\"" Jun 20 19:31:33.372294 containerd[2800]: time="2025-06-20T19:31:33.372274757Z" level=info msg="StartContainer for \"a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30\"" Jun 20 19:31:33.372468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643548312.mount: Deactivated successfully. Jun 20 19:31:33.372992 containerd[2800]: time="2025-06-20T19:31:33.372970518Z" level=info msg="connecting to shim a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30" address="unix:///run/containerd/s/9da39e466c14f65f15bb2c79500bf5c9d88f7fa212d19ae3b316625ec1ce62b8" protocol=ttrpc version=3 Jun 20 19:31:33.400022 systemd[1]: Started cri-containerd-a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30.scope - libcontainer container a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30. Jun 20 19:31:33.419973 containerd[2800]: time="2025-06-20T19:31:33.419950678Z" level=info msg="StartContainer for \"a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30\" returns successfully" Jun 20 19:31:34.054953 kubelet[4320]: I0620 19:31:34.054908 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-g6dn6" podStartSLOduration=2.023283855 podStartE2EDuration="3.054892993s" podCreationTimestamp="2025-06-20 19:31:31 +0000 UTC" firstStartedPulling="2025-06-20 19:31:32.332913743 +0000 UTC m=+6.376409057" lastFinishedPulling="2025-06-20 19:31:33.364522881 +0000 UTC m=+7.408018195" observedRunningTime="2025-06-20 19:31:34.054864905 +0000 UTC m=+8.098360219" watchObservedRunningTime="2025-06-20 19:31:34.054892993 +0000 UTC m=+8.098388307" Jun 20 19:31:38.119687 sudo[3073]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:38.181765 sshd[3070]: Connection closed by 147.75.109.163 port 60626 Jun 20 19:31:38.182141 sshd-session[3068]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:38.185233 systemd-logind[2783]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:31:38.185542 systemd[1]: sshd@6-147.28.145.50:22-147.75.109.163:60626.service: Deactivated successfully. Jun 20 19:31:38.187201 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:31:38.187400 systemd[1]: session-9.scope: Consumed 8.221s CPU time, 247.6M memory peak. Jun 20 19:31:38.189186 systemd-logind[2783]: Removed session 9. Jun 20 19:31:38.395436 update_engine[2794]: I20250620 19:31:38.395282 2794 update_attempter.cc:509] Updating boot flags... Jun 20 19:31:41.141471 systemd[1]: Created slice kubepods-besteffort-podac02a8c0_2bf5_4493_aede_a3f9d8f6ee01.slice - libcontainer container kubepods-besteffort-podac02a8c0_2bf5_4493_aede_a3f9d8f6ee01.slice. 
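[Annotation] Unlike the earlier startup entries, the tigera-operator one above involved a real image pull, and its two durations diverge: podStartE2EDuration is wall-clock time from pod creation, while podStartSLOduration appears to exclude the time spent pulling the image. That reading is an inference, but the timestamps copied from the entry are exactly consistent with it:

```go
// Check of the tigera-operator pod_startup_latency_tracker entry above:
//   E2E = runningObserved - podCreated                       = 3.054892993s
//   SLO = E2E - (lastFinishedPulling - firstStartedPulling)  = 2.023283855s
// All timestamps are copied from the log entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	podCreated := time.Date(2025, time.June, 20, 19, 31, 31, 0, time.UTC)
	firstPull := time.Date(2025, time.June, 20, 19, 31, 32, 332913743, time.UTC)
	lastPull := time.Date(2025, time.June, 20, 19, 31, 33, 364522881, time.UTC)
	running := time.Date(2025, time.June, 20, 19, 31, 34, 54892993, time.UTC)

	e2e := running.Sub(podCreated)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo) // 3.054892993s 2.023283855s, matching the logged values
}
```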
Jun 20 19:31:41.219895 kubelet[4320]: I0620 19:31:41.219864 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wng9q\" (UniqueName: \"kubernetes.io/projected/ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01-kube-api-access-wng9q\") pod \"calico-typha-996d554dd-vpz7k\" (UID: \"ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01\") " pod="calico-system/calico-typha-996d554dd-vpz7k" Jun 20 19:31:41.219895 kubelet[4320]: I0620 19:31:41.219900 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01-tigera-ca-bundle\") pod \"calico-typha-996d554dd-vpz7k\" (UID: \"ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01\") " pod="calico-system/calico-typha-996d554dd-vpz7k" Jun 20 19:31:41.220248 kubelet[4320]: I0620 19:31:41.219919 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01-typha-certs\") pod \"calico-typha-996d554dd-vpz7k\" (UID: \"ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01\") " pod="calico-system/calico-typha-996d554dd-vpz7k" Jun 20 19:31:41.344174 systemd[1]: Created slice kubepods-besteffort-podb2854dba_54c2_4387_a6cb_4a06da8bab1c.slice - libcontainer container kubepods-besteffort-podb2854dba_54c2_4387_a6cb_4a06da8bab1c.slice. Jun 20 19:31:41.421493 kubelet[4320]: I0620 19:31:41.421139 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-flexvol-driver-host\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421493 kubelet[4320]: I0620 19:31:41.421179 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-xtables-lock\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421493 kubelet[4320]: I0620 19:31:41.421198 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-policysync\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421493 kubelet[4320]: I0620 19:31:41.421214 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2854dba-54c2-4387-a6cb-4a06da8bab1c-tigera-ca-bundle\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421493 kubelet[4320]: I0620 19:31:41.421242 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-var-run-calico\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421647 kubelet[4320]: I0620 19:31:41.421310 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-cni-bin-dir\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421647 kubelet[4320]: I0620 19:31:41.421386 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-cni-net-dir\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421647 kubelet[4320]: I0620 19:31:41.421418 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-lib-modules\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421647 kubelet[4320]: I0620 19:31:41.421445 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-var-lib-calico\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421647 kubelet[4320]: I0620 19:31:41.421490 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b2854dba-54c2-4387-a6cb-4a06da8bab1c-cni-log-dir\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421748 kubelet[4320]: I0620 19:31:41.421516 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b2854dba-54c2-4387-a6cb-4a06da8bab1c-node-certs\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.421748 kubelet[4320]: I0620 19:31:41.421531 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44z7q\" (UniqueName: \"kubernetes.io/projected/b2854dba-54c2-4387-a6cb-4a06da8bab1c-kube-api-access-44z7q\") pod \"calico-node-zchgs\" (UID: \"b2854dba-54c2-4387-a6cb-4a06da8bab1c\") " pod="calico-system/calico-node-zchgs" Jun 20 19:31:41.445630 containerd[2800]: time="2025-06-20T19:31:41.445592182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-996d554dd-vpz7k,Uid:ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:41.454239 containerd[2800]: time="2025-06-20T19:31:41.454214257Z" level=info msg="connecting to shim 492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7" address="unix:///run/containerd/s/12159cc478ca0ba4a16e7aca29e97a480cc112ef0d5f6f17b4aa36e6f994e67d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:41.488033 systemd[1]: Started cri-containerd-492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7.scope - libcontainer container 492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7. 
Jun 20 19:31:41.513737 containerd[2800]: time="2025-06-20T19:31:41.513705047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-996d554dd-vpz7k,Uid:ac02a8c0-2bf5-4493-aede-a3f9d8f6ee01,Namespace:calico-system,Attempt:0,} returns sandbox id \"492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7\"" Jun 20 19:31:41.514649 containerd[2800]: time="2025-06-20T19:31:41.514629478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:31:41.523296 kubelet[4320]: E0620 19:31:41.523266 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.523296 kubelet[4320]: W0620 19:31:41.523289 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.523398 kubelet[4320]: E0620 19:31:41.523318 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.523512 kubelet[4320]: E0620 19:31:41.523501 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.523512 kubelet[4320]: W0620 19:31:41.523510 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.523561 kubelet[4320]: E0620 19:31:41.523519 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.523709 kubelet[4320]: E0620 19:31:41.523700 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.523730 kubelet[4320]: W0620 19:31:41.523708 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.523730 kubelet[4320]: E0620 19:31:41.523716 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.523948 kubelet[4320]: E0620 19:31:41.523937 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.523948 kubelet[4320]: W0620 19:31:41.523947 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.523989 kubelet[4320]: E0620 19:31:41.523955 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.524164 kubelet[4320]: E0620 19:31:41.524154 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.524187 kubelet[4320]: W0620 19:31:41.524164 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.524187 kubelet[4320]: E0620 19:31:41.524173 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.524360 kubelet[4320]: E0620 19:31:41.524351 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.524383 kubelet[4320]: W0620 19:31:41.524360 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.524383 kubelet[4320]: E0620 19:31:41.524368 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.524562 kubelet[4320]: E0620 19:31:41.524553 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.524585 kubelet[4320]: W0620 19:31:41.524562 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.524585 kubelet[4320]: E0620 19:31:41.524570 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.525108 kubelet[4320]: E0620 19:31:41.525093 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.525133 kubelet[4320]: W0620 19:31:41.525109 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.525133 kubelet[4320]: E0620 19:31:41.525123 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.539325 kubelet[4320]: E0620 19:31:41.539294 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw4bd" podUID="e3bdbe06-b50d-4210-852c-5930face1fa3" Jun 20 19:31:41.544868 kubelet[4320]: E0620 19:31:41.544838 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.544868 kubelet[4320]: W0620 19:31:41.544860 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.545006 kubelet[4320]: E0620 19:31:41.544874 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.614584 kubelet[4320]: E0620 19:31:41.614556 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.614584 kubelet[4320]: W0620 19:31:41.614576 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.614682 kubelet[4320]: E0620 19:31:41.614595 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.614770 kubelet[4320]: E0620 19:31:41.614758 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.614802 kubelet[4320]: W0620 19:31:41.614767 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.614823 kubelet[4320]: E0620 19:31:41.614804 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.614943 kubelet[4320]: E0620 19:31:41.614934 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.614969 kubelet[4320]: W0620 19:31:41.614942 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.614969 kubelet[4320]: E0620 19:31:41.614950 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.615200 kubelet[4320]: E0620 19:31:41.615188 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.615200 kubelet[4320]: W0620 19:31:41.615198 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.615244 kubelet[4320]: E0620 19:31:41.615206 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.615400 kubelet[4320]: E0620 19:31:41.615388 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.615400 kubelet[4320]: W0620 19:31:41.615397 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.615446 kubelet[4320]: E0620 19:31:41.615405 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.615621 kubelet[4320]: E0620 19:31:41.615613 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.615644 kubelet[4320]: W0620 19:31:41.615620 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.615644 kubelet[4320]: E0620 19:31:41.615628 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.615794 kubelet[4320]: E0620 19:31:41.615783 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.615794 kubelet[4320]: W0620 19:31:41.615791 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.615838 kubelet[4320]: E0620 19:31:41.615798 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.615929 kubelet[4320]: E0620 19:31:41.615920 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.615953 kubelet[4320]: W0620 19:31:41.615930 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.615953 kubelet[4320]: E0620 19:31:41.615938 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.616072 kubelet[4320]: E0620 19:31:41.616063 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.616097 kubelet[4320]: W0620 19:31:41.616072 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.616097 kubelet[4320]: E0620 19:31:41.616080 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.616220 kubelet[4320]: E0620 19:31:41.616211 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.616243 kubelet[4320]: W0620 19:31:41.616219 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.616243 kubelet[4320]: E0620 19:31:41.616227 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.616440 kubelet[4320]: E0620 19:31:41.616432 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.616463 kubelet[4320]: W0620 19:31:41.616440 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.616463 kubelet[4320]: E0620 19:31:41.616447 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.616609 kubelet[4320]: E0620 19:31:41.616601 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.616630 kubelet[4320]: W0620 19:31:41.616608 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.616630 kubelet[4320]: E0620 19:31:41.616615 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.616774 kubelet[4320]: E0620 19:31:41.616766 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.616795 kubelet[4320]: W0620 19:31:41.616773 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.616795 kubelet[4320]: E0620 19:31:41.616781 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.616939 kubelet[4320]: E0620 19:31:41.616930 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.616967 kubelet[4320]: W0620 19:31:41.616938 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.616967 kubelet[4320]: E0620 19:31:41.616946 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.617074 kubelet[4320]: E0620 19:31:41.617066 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.617095 kubelet[4320]: W0620 19:31:41.617074 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.617095 kubelet[4320]: E0620 19:31:41.617081 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.617229 kubelet[4320]: E0620 19:31:41.617221 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.617252 kubelet[4320]: W0620 19:31:41.617229 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.617252 kubelet[4320]: E0620 19:31:41.617237 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.617427 kubelet[4320]: E0620 19:31:41.617419 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.617448 kubelet[4320]: W0620 19:31:41.617427 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.617448 kubelet[4320]: E0620 19:31:41.617435 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.617648 kubelet[4320]: E0620 19:31:41.617640 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.617669 kubelet[4320]: W0620 19:31:41.617649 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.617669 kubelet[4320]: E0620 19:31:41.617656 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.617819 kubelet[4320]: E0620 19:31:41.617811 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.617842 kubelet[4320]: W0620 19:31:41.617819 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.617842 kubelet[4320]: E0620 19:31:41.617827 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.617965 kubelet[4320]: E0620 19:31:41.617955 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.617990 kubelet[4320]: W0620 19:31:41.617965 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.617990 kubelet[4320]: E0620 19:31:41.617972 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.624313 kubelet[4320]: E0620 19:31:41.624292 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.624313 kubelet[4320]: W0620 19:31:41.624310 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.624418 kubelet[4320]: E0620 19:31:41.624328 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.624418 kubelet[4320]: I0620 19:31:41.624354 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3bdbe06-b50d-4210-852c-5930face1fa3-kubelet-dir\") pod \"csi-node-driver-tw4bd\" (UID: \"e3bdbe06-b50d-4210-852c-5930face1fa3\") " pod="calico-system/csi-node-driver-tw4bd" Jun 20 19:31:41.624827 kubelet[4320]: E0620 19:31:41.624517 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.624827 kubelet[4320]: W0620 19:31:41.624527 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.624827 kubelet[4320]: E0620 19:31:41.624536 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.624827 kubelet[4320]: I0620 19:31:41.624555 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e3bdbe06-b50d-4210-852c-5930face1fa3-registration-dir\") pod \"csi-node-driver-tw4bd\" (UID: \"e3bdbe06-b50d-4210-852c-5930face1fa3\") " pod="calico-system/csi-node-driver-tw4bd" Jun 20 19:31:41.624827 kubelet[4320]: E0620 19:31:41.624790 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.624827 kubelet[4320]: W0620 19:31:41.624806 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.624827 kubelet[4320]: E0620 19:31:41.624819 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.624827 kubelet[4320]: E0620 19:31:41.624961 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.624827 kubelet[4320]: W0620 19:31:41.624971 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.625691 kubelet[4320]: E0620 19:31:41.624980 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.625691 kubelet[4320]: E0620 19:31:41.625166 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.625691 kubelet[4320]: W0620 19:31:41.625174 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.625691 kubelet[4320]: E0620 19:31:41.625182 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.625691 kubelet[4320]: E0620 19:31:41.625297 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.625691 kubelet[4320]: W0620 19:31:41.625304 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.625691 kubelet[4320]: E0620 19:31:41.625311 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.625691 kubelet[4320]: E0620 19:31:41.625484 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.625691 kubelet[4320]: W0620 19:31:41.625490 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.625691 kubelet[4320]: E0620 19:31:41.625498 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.626231 kubelet[4320]: I0620 19:31:41.625518 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e3bdbe06-b50d-4210-852c-5930face1fa3-socket-dir\") pod \"csi-node-driver-tw4bd\" (UID: \"e3bdbe06-b50d-4210-852c-5930face1fa3\") " pod="calico-system/csi-node-driver-tw4bd" Jun 20 19:31:41.626231 kubelet[4320]: E0620 19:31:41.625729 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626231 kubelet[4320]: W0620 19:31:41.625738 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626231 kubelet[4320]: E0620 19:31:41.625747 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.626231 kubelet[4320]: I0620 19:31:41.625764 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e3bdbe06-b50d-4210-852c-5930face1fa3-varrun\") pod \"csi-node-driver-tw4bd\" (UID: \"e3bdbe06-b50d-4210-852c-5930face1fa3\") " pod="calico-system/csi-node-driver-tw4bd" Jun 20 19:31:41.626231 kubelet[4320]: E0620 19:31:41.625955 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626231 kubelet[4320]: W0620 19:31:41.625970 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626231 kubelet[4320]: E0620 19:31:41.625983 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.626231 kubelet[4320]: E0620 19:31:41.626151 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626484 kubelet[4320]: W0620 19:31:41.626160 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626484 kubelet[4320]: E0620 19:31:41.626168 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.626484 kubelet[4320]: E0620 19:31:41.626408 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626484 kubelet[4320]: W0620 19:31:41.626418 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626484 kubelet[4320]: E0620 19:31:41.626429 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.626484 kubelet[4320]: I0620 19:31:41.626459 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxdm\" (UniqueName: \"kubernetes.io/projected/e3bdbe06-b50d-4210-852c-5930face1fa3-kube-api-access-rfxdm\") pod \"csi-node-driver-tw4bd\" (UID: \"e3bdbe06-b50d-4210-852c-5930face1fa3\") " pod="calico-system/csi-node-driver-tw4bd" Jun 20 19:31:41.626671 kubelet[4320]: E0620 19:31:41.626658 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626696 kubelet[4320]: W0620 19:31:41.626670 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626696 kubelet[4320]: E0620 19:31:41.626680 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.626811 kubelet[4320]: E0620 19:31:41.626803 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626845 kubelet[4320]: W0620 19:31:41.626810 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626845 kubelet[4320]: E0620 19:31:41.626817 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.626845 kubelet[4320]: E0620 19:31:41.626960 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626845 kubelet[4320]: W0620 19:31:41.626968 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626845 kubelet[4320]: E0620 19:31:41.626976 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.626845 kubelet[4320]: E0620 19:31:41.627088 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.626845 kubelet[4320]: W0620 19:31:41.627095 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.626845 kubelet[4320]: E0620 19:31:41.627103 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.646676 containerd[2800]: time="2025-06-20T19:31:41.646632391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zchgs,Uid:b2854dba-54c2-4387-a6cb-4a06da8bab1c,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:41.654926 containerd[2800]: time="2025-06-20T19:31:41.654897297Z" level=info msg="connecting to shim 6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671" address="unix:///run/containerd/s/1e489ccd65b183ae86a762215c2d5f47eb2adf82801e187f8790adacf25fd177" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:41.685974 systemd[1]: Started cri-containerd-6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671.scope - libcontainer container 6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671. Jun 20 19:31:41.703378 containerd[2800]: time="2025-06-20T19:31:41.703348888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zchgs,Uid:b2854dba-54c2-4387-a6cb-4a06da8bab1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671\"" Jun 20 19:31:41.727248 kubelet[4320]: E0620 19:31:41.727228 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.727248 kubelet[4320]: W0620 19:31:41.727244 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.727375 kubelet[4320]: E0620 19:31:41.727259 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.727443 kubelet[4320]: E0620 19:31:41.727434 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.727443 kubelet[4320]: W0620 19:31:41.727442 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.727487 kubelet[4320]: E0620 19:31:41.727450 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.727710 kubelet[4320]: E0620 19:31:41.727694 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.727734 kubelet[4320]: W0620 19:31:41.727711 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.727734 kubelet[4320]: E0620 19:31:41.727724 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.727929 kubelet[4320]: E0620 19:31:41.727920 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.727955 kubelet[4320]: W0620 19:31:41.727928 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.727955 kubelet[4320]: E0620 19:31:41.727936 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.728136 kubelet[4320]: E0620 19:31:41.728128 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.728160 kubelet[4320]: W0620 19:31:41.728136 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.728160 kubelet[4320]: E0620 19:31:41.728143 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.728384 kubelet[4320]: E0620 19:31:41.728371 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.728406 kubelet[4320]: W0620 19:31:41.728385 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.728406 kubelet[4320]: E0620 19:31:41.728397 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.728591 kubelet[4320]: E0620 19:31:41.728581 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.728613 kubelet[4320]: W0620 19:31:41.728593 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.728613 kubelet[4320]: E0620 19:31:41.728602 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.728776 kubelet[4320]: E0620 19:31:41.728768 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.728799 kubelet[4320]: W0620 19:31:41.728776 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.728799 kubelet[4320]: E0620 19:31:41.728782 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.728979 kubelet[4320]: E0620 19:31:41.728972 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.729001 kubelet[4320]: W0620 19:31:41.728979 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.729001 kubelet[4320]: E0620 19:31:41.728987 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.729214 kubelet[4320]: E0620 19:31:41.729202 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.729235 kubelet[4320]: W0620 19:31:41.729214 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.729235 kubelet[4320]: E0620 19:31:41.729223 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.729415 kubelet[4320]: E0620 19:31:41.729406 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.729437 kubelet[4320]: W0620 19:31:41.729415 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.729437 kubelet[4320]: E0620 19:31:41.729422 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.729628 kubelet[4320]: E0620 19:31:41.729618 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.729650 kubelet[4320]: W0620 19:31:41.729629 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.729650 kubelet[4320]: E0620 19:31:41.729639 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.729822 kubelet[4320]: E0620 19:31:41.729815 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.729843 kubelet[4320]: W0620 19:31:41.729822 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.729843 kubelet[4320]: E0620 19:31:41.729830 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.730009 kubelet[4320]: E0620 19:31:41.730001 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.730009 kubelet[4320]: W0620 19:31:41.730008 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.730056 kubelet[4320]: E0620 19:31:41.730020 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.730251 kubelet[4320]: E0620 19:31:41.730243 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.730277 kubelet[4320]: W0620 19:31:41.730251 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.730277 kubelet[4320]: E0620 19:31:41.730258 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.730404 kubelet[4320]: E0620 19:31:41.730396 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.730427 kubelet[4320]: W0620 19:31:41.730404 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.730427 kubelet[4320]: E0620 19:31:41.730411 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.730622 kubelet[4320]: E0620 19:31:41.730614 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.730643 kubelet[4320]: W0620 19:31:41.730622 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.730643 kubelet[4320]: E0620 19:31:41.730628 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.730917 kubelet[4320]: E0620 19:31:41.730905 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.730939 kubelet[4320]: W0620 19:31:41.730918 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.730939 kubelet[4320]: E0620 19:31:41.730929 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.731074 kubelet[4320]: E0620 19:31:41.731066 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.731074 kubelet[4320]: W0620 19:31:41.731074 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.731116 kubelet[4320]: E0620 19:31:41.731081 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.731208 kubelet[4320]: E0620 19:31:41.731200 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.731229 kubelet[4320]: W0620 19:31:41.731207 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.731229 kubelet[4320]: E0620 19:31:41.731214 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.731434 kubelet[4320]: E0620 19:31:41.731427 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.731455 kubelet[4320]: W0620 19:31:41.731434 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.731455 kubelet[4320]: E0620 19:31:41.731442 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.732073 kubelet[4320]: E0620 19:31:41.731697 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.732073 kubelet[4320]: W0620 19:31:41.731710 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.732073 kubelet[4320]: E0620 19:31:41.731722 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:41.732073 kubelet[4320]: E0620 19:31:41.731869 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.732073 kubelet[4320]: W0620 19:31:41.731876 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.732073 kubelet[4320]: E0620 19:31:41.731884 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.732073 kubelet[4320]: E0620 19:31:41.732010 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.732073 kubelet[4320]: W0620 19:31:41.732018 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.732073 kubelet[4320]: E0620 19:31:41.732025 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.732316 kubelet[4320]: E0620 19:31:41.732197 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.732316 kubelet[4320]: W0620 19:31:41.732205 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.732316 kubelet[4320]: E0620 19:31:41.732213 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:31:41.740055 kubelet[4320]: E0620 19:31:41.740041 4320 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:31:41.740055 kubelet[4320]: W0620 19:31:41.740054 4320 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:31:41.740115 kubelet[4320]: E0620 19:31:41.740065 4320 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:31:42.558636 containerd[2800]: time="2025-06-20T19:31:42.558589689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:42.559049 containerd[2800]: time="2025-06-20T19:31:42.558618776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33070817" Jun 20 19:31:42.559305 containerd[2800]: time="2025-06-20T19:31:42.559280052Z" level=info msg="ImageCreate event name:\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:42.560736 containerd[2800]: time="2025-06-20T19:31:42.560718713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:42.561307 containerd[2800]: time="2025-06-20T19:31:42.561286807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"33070671\" in 1.046631843s" Jun 20 19:31:42.561344 containerd[2800]: time="2025-06-20T19:31:42.561313573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\"" Jun 20 19:31:42.561993 containerd[2800]: time="2025-06-20T19:31:42.561931599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:31:42.567157 containerd[2800]: time="2025-06-20T19:31:42.567135029Z" level=info msg="CreateContainer within sandbox \"492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:31:42.570807 containerd[2800]: time="2025-06-20T19:31:42.570779011Z" level=info msg="Container 72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:42.574085 containerd[2800]: time="2025-06-20T19:31:42.574059546Z" level=info msg="CreateContainer within sandbox \"492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57\"" Jun 20 19:31:42.574430 containerd[2800]: time="2025-06-20T19:31:42.574409429Z" level=info msg="StartContainer for \"72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57\"" Jun 20 19:31:42.575351 containerd[2800]: time="2025-06-20T19:31:42.575327486Z" level=info msg="connecting to shim 72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57" address="unix:///run/containerd/s/12159cc478ca0ba4a16e7aca29e97a480cc112ef0d5f6f17b4aa36e6f994e67d" protocol=ttrpc version=3 Jun 20 19:31:42.601036 systemd[1]: Started cri-containerd-72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57.scope - libcontainer container 72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57. 
Jun 20 19:31:42.629512 containerd[2800]: time="2025-06-20T19:31:42.629486128Z" level=info msg="StartContainer for \"72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57\" returns successfully" Jun 20 19:31:42.969047 containerd[2800]: time="2025-06-20T19:31:42.969012828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:42.969151 containerd[2800]: time="2025-06-20T19:31:42.969064080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4264319" Jun 20 19:31:42.969675 containerd[2800]: time="2025-06-20T19:31:42.969656020Z" level=info msg="ImageCreate event name:\"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:42.971192 containerd[2800]: time="2025-06-20T19:31:42.971169097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:42.971786 containerd[2800]: time="2025-06-20T19:31:42.971763598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5633520\" in 409.802912ms" Jun 20 19:31:42.971815 containerd[2800]: time="2025-06-20T19:31:42.971791285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\"" Jun 20 19:31:42.973698 containerd[2800]: time="2025-06-20T19:31:42.973677050Z" level=info msg="CreateContainer within sandbox \"6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:31:42.978060 containerd[2800]: time="2025-06-20T19:31:42.978025438Z" level=info msg="Container 77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:42.981998 containerd[2800]: time="2025-06-20T19:31:42.981960929Z" level=info msg="CreateContainer within sandbox \"6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5\"" Jun 20 19:31:42.982352 containerd[2800]: time="2025-06-20T19:31:42.982328976Z" level=info msg="StartContainer for \"77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5\"" Jun 20 19:31:42.983609 containerd[2800]: time="2025-06-20T19:31:42.983587833Z" level=info msg="connecting to shim 77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5" address="unix:///run/containerd/s/1e489ccd65b183ae86a762215c2d5f47eb2adf82801e187f8790adacf25fd177" protocol=ttrpc version=3 Jun 20 19:31:43.019045 systemd[1]: Started cri-containerd-77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5.scope - libcontainer container 77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5. 
Jun 20 19:31:43.033014 kubelet[4320]: E0620 19:31:43.032979 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw4bd" podUID="e3bdbe06-b50d-4210-852c-5930face1fa3" Jun 20 19:31:43.046368 containerd[2800]: time="2025-06-20T19:31:43.046342702Z" level=info msg="StartContainer for \"77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5\" returns successfully" Jun 20 19:31:43.056537 systemd[1]: cri-containerd-77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5.scope: Deactivated successfully. Jun 20 19:31:43.058471 containerd[2800]: time="2025-06-20T19:31:43.058442368Z" level=info msg="received exit event container_id:\"77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5\" id:\"77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5\" pid:5378 exited_at:{seconds:1750447903 nanos:58194953}" Jun 20 19:31:43.058545 containerd[2800]: time="2025-06-20T19:31:43.058526867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5\" id:\"77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5\" pid:5378 exited_at:{seconds:1750447903 nanos:58194953}" Jun 20 19:31:44.067667 kubelet[4320]: I0620 19:31:44.067643 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:31:44.068550 containerd[2800]: time="2025-06-20T19:31:44.068525930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:31:44.078662 kubelet[4320]: I0620 19:31:44.078622 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-996d554dd-vpz7k" podStartSLOduration=2.031242606 podStartE2EDuration="3.078608825s" podCreationTimestamp="2025-06-20 19:31:41 +0000 UTC" firstStartedPulling="2025-06-20 19:31:41.514466117 +0000 UTC m=+15.557961431" lastFinishedPulling="2025-06-20 19:31:42.561832336 +0000 UTC m=+16.605327650" observedRunningTime="2025-06-20 19:31:43.080347948 +0000 UTC m=+17.123843262" watchObservedRunningTime="2025-06-20 19:31:44.078608825 +0000 UTC m=+18.122104099" Jun 20 19:31:45.033169 kubelet[4320]: E0620 19:31:45.033130 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw4bd" podUID="e3bdbe06-b50d-4210-852c-5930face1fa3" Jun 20 19:31:45.304549 containerd[2800]: time="2025-06-20T19:31:45.304417922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:45.304549 containerd[2800]: time="2025-06-20T19:31:45.304459370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=65872909" Jun 20 19:31:45.305167 containerd[2800]: time="2025-06-20T19:31:45.305148029Z" level=info msg="ImageCreate event name:\"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:45.306753 containerd[2800]: time="2025-06-20T19:31:45.306730746Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:45.307386 containerd[2800]: time="2025-06-20T19:31:45.307360712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"67242150\" in 1.238802576s" Jun 20 19:31:45.307437 containerd[2800]: time="2025-06-20T19:31:45.307390758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\"" Jun 20 19:31:45.309435 containerd[2800]: time="2025-06-20T19:31:45.309407763Z" level=info msg="CreateContainer within sandbox \"6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:31:45.313942 containerd[2800]: time="2025-06-20T19:31:45.313915307Z" level=info msg="Container 9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:45.318791 containerd[2800]: time="2025-06-20T19:31:45.318765480Z" level=info msg="CreateContainer within sandbox \"6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1\"" Jun 20 19:31:45.319129 containerd[2800]: time="2025-06-20T19:31:45.319103028Z" level=info msg="StartContainer for \"9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1\"" Jun 20 19:31:45.320415 containerd[2800]: time="2025-06-20T19:31:45.320392406Z" level=info msg="connecting to shim 9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1" address="unix:///run/containerd/s/1e489ccd65b183ae86a762215c2d5f47eb2adf82801e187f8790adacf25fd177" protocol=ttrpc version=3 Jun 20 19:31:45.352964 systemd[1]: Started cri-containerd-9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1.scope - libcontainer container 9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1. Jun 20 19:31:45.381121 containerd[2800]: time="2025-06-20T19:31:45.381092942Z" level=info msg="StartContainer for \"9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1\" returns successfully" Jun 20 19:31:45.756118 containerd[2800]: time="2025-06-20T19:31:45.756086917Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:31:45.758111 systemd[1]: cri-containerd-9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1.scope: Deactivated successfully. Jun 20 19:31:45.758435 systemd[1]: cri-containerd-9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1.scope: Consumed 933ms CPU time, 197M memory peak, 165.8M written to disk. 
Jun 20 19:31:45.759422 containerd[2800]: time="2025-06-20T19:31:45.759395381Z" level=info msg="received exit event container_id:\"9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1\" id:\"9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1\" pid:5441 exited_at:{seconds:1750447905 nanos:759252352}" Jun 20 19:31:45.759537 containerd[2800]: time="2025-06-20T19:31:45.759508244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1\" id:\"9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1\" pid:5441 exited_at:{seconds:1750447905 nanos:759252352}" Jun 20 19:31:45.761553 kubelet[4320]: I0620 19:31:45.761518 4320 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:31:45.776194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1-rootfs.mount: Deactivated successfully. Jun 20 19:31:45.781475 systemd[1]: Created slice kubepods-burstable-pod76b3a8a1_ea32_4d26_95c5_1d8d2b0dc549.slice - libcontainer container kubepods-burstable-pod76b3a8a1_ea32_4d26_95c5_1d8d2b0dc549.slice. Jun 20 19:31:45.785499 systemd[1]: Created slice kubepods-burstable-pod89829641_c7f4_45b6_a27e_6b93bdf68c55.slice - libcontainer container kubepods-burstable-pod89829641_c7f4_45b6_a27e_6b93bdf68c55.slice. Jun 20 19:31:45.850901 systemd[1]: Created slice kubepods-besteffort-pod4fdaa1f1_f07b_43b7_8b92_4c1a87677155.slice - libcontainer container kubepods-besteffort-pod4fdaa1f1_f07b_43b7_8b92_4c1a87677155.slice. Jun 20 19:31:45.858914 kubelet[4320]: I0620 19:31:45.858883 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9s2k\" (UniqueName: \"kubernetes.io/projected/76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549-kube-api-access-k9s2k\") pod \"coredns-674b8bbfcf-cpq5r\" (UID: \"76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549\") " pod="kube-system/coredns-674b8bbfcf-cpq5r" Jun 20 19:31:45.859026 kubelet[4320]: I0620 19:31:45.858921 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4fdaa1f1-f07b-43b7-8b92-4c1a87677155-goldmane-key-pair\") pod \"goldmane-5bd85449d4-9z2k6\" (UID: \"4fdaa1f1-f07b-43b7-8b92-4c1a87677155\") " pod="calico-system/goldmane-5bd85449d4-9z2k6" Jun 20 19:31:45.859026 kubelet[4320]: I0620 19:31:45.858937 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqc7b\" (UniqueName: \"kubernetes.io/projected/89829641-c7f4-45b6-a27e-6b93bdf68c55-kube-api-access-jqc7b\") pod \"coredns-674b8bbfcf-dw589\" (UID: \"89829641-c7f4-45b6-a27e-6b93bdf68c55\") " pod="kube-system/coredns-674b8bbfcf-dw589" Jun 20 19:31:45.859026 kubelet[4320]: I0620 19:31:45.858958 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fdaa1f1-f07b-43b7-8b92-4c1a87677155-config\") pod \"goldmane-5bd85449d4-9z2k6\" (UID: \"4fdaa1f1-f07b-43b7-8b92-4c1a87677155\") " pod="calico-system/goldmane-5bd85449d4-9z2k6" Jun 20 19:31:45.859026 kubelet[4320]: I0620 19:31:45.859012 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fdaa1f1-f07b-43b7-8b92-4c1a87677155-goldmane-ca-bundle\") pod 
\"goldmane-5bd85449d4-9z2k6\" (UID: \"4fdaa1f1-f07b-43b7-8b92-4c1a87677155\") " pod="calico-system/goldmane-5bd85449d4-9z2k6" Jun 20 19:31:45.859200 kubelet[4320]: I0620 19:31:45.859072 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6ct\" (UniqueName: \"kubernetes.io/projected/4fdaa1f1-f07b-43b7-8b92-4c1a87677155-kube-api-access-vq6ct\") pod \"goldmane-5bd85449d4-9z2k6\" (UID: \"4fdaa1f1-f07b-43b7-8b92-4c1a87677155\") " pod="calico-system/goldmane-5bd85449d4-9z2k6" Jun 20 19:31:45.859200 kubelet[4320]: I0620 19:31:45.859163 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549-config-volume\") pod \"coredns-674b8bbfcf-cpq5r\" (UID: \"76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549\") " pod="kube-system/coredns-674b8bbfcf-cpq5r" Jun 20 19:31:45.859200 kubelet[4320]: I0620 19:31:45.859187 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89829641-c7f4-45b6-a27e-6b93bdf68c55-config-volume\") pod \"coredns-674b8bbfcf-dw589\" (UID: \"89829641-c7f4-45b6-a27e-6b93bdf68c55\") " pod="kube-system/coredns-674b8bbfcf-dw589" Jun 20 19:31:45.865719 systemd[1]: Created slice kubepods-besteffort-pod56131e8f_0d6e_4ded_a253_1effc765ab02.slice - libcontainer container kubepods-besteffort-pod56131e8f_0d6e_4ded_a253_1effc765ab02.slice. Jun 20 19:31:45.903881 systemd[1]: Created slice kubepods-besteffort-pod87fee9d4_61a9_48c9_a930_d56092cce167.slice - libcontainer container kubepods-besteffort-pod87fee9d4_61a9_48c9_a930_d56092cce167.slice. Jun 20 19:31:45.908860 systemd[1]: Created slice kubepods-besteffort-pod6959dc23_1f17_4656_bbb0_c17e5889e419.slice - libcontainer container kubepods-besteffort-pod6959dc23_1f17_4656_bbb0_c17e5889e419.slice. Jun 20 19:31:45.912860 systemd[1]: Created slice kubepods-besteffort-podfb8f50da_2df9_4446_a7af_2903440861ca.slice - libcontainer container kubepods-besteffort-podfb8f50da_2df9_4446_a7af_2903440861ca.slice. Jun 20 19:31:45.916711 systemd[1]: Created slice kubepods-besteffort-pod0515d912_1882_4a3f_805a_7b9c8d6f467b.slice - libcontainer container kubepods-besteffort-pod0515d912_1882_4a3f_805a_7b9c8d6f467b.slice. 
Jun 20 19:31:45.959982 kubelet[4320]: I0620 19:31:45.959945 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6slg\" (UniqueName: \"kubernetes.io/projected/6959dc23-1f17-4656-bbb0-c17e5889e419-kube-api-access-m6slg\") pod \"calico-apiserver-9b9958ddd-65rvm\" (UID: \"6959dc23-1f17-4656-bbb0-c17e5889e419\") " pod="calico-apiserver/calico-apiserver-9b9958ddd-65rvm" Jun 20 19:31:45.960050 kubelet[4320]: I0620 19:31:45.959983 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc4lp\" (UniqueName: \"kubernetes.io/projected/87fee9d4-61a9-48c9-a930-d56092cce167-kube-api-access-zc4lp\") pod \"calico-kube-controllers-f884559d4-mkbr4\" (UID: \"87fee9d4-61a9-48c9-a930-d56092cce167\") " pod="calico-system/calico-kube-controllers-f884559d4-mkbr4" Jun 20 19:31:45.960050 kubelet[4320]: I0620 19:31:45.960003 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vbz4\" (UniqueName: \"kubernetes.io/projected/0515d912-1882-4a3f-805a-7b9c8d6f467b-kube-api-access-2vbz4\") pod \"whisker-6994686b5-zp82r\" (UID: \"0515d912-1882-4a3f-805a-7b9c8d6f467b\") " pod="calico-system/whisker-6994686b5-zp82r" Jun 20 19:31:45.960134 kubelet[4320]: I0620 19:31:45.960068 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87fee9d4-61a9-48c9-a930-d56092cce167-tigera-ca-bundle\") pod \"calico-kube-controllers-f884559d4-mkbr4\" (UID: \"87fee9d4-61a9-48c9-a930-d56092cce167\") " pod="calico-system/calico-kube-controllers-f884559d4-mkbr4" Jun 20 19:31:45.960202 kubelet[4320]: I0620 19:31:45.960163 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6959dc23-1f17-4656-bbb0-c17e5889e419-calico-apiserver-certs\") pod \"calico-apiserver-9b9958ddd-65rvm\" (UID: \"6959dc23-1f17-4656-bbb0-c17e5889e419\") " pod="calico-apiserver/calico-apiserver-9b9958ddd-65rvm" Jun 20 19:31:45.960258 kubelet[4320]: I0620 19:31:45.960219 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7tq\" (UniqueName: \"kubernetes.io/projected/fb8f50da-2df9-4446-a7af-2903440861ca-kube-api-access-xh7tq\") pod \"calico-apiserver-796757f947-zxknb\" (UID: \"fb8f50da-2df9-4446-a7af-2903440861ca\") " pod="calico-apiserver/calico-apiserver-796757f947-zxknb" Jun 20 19:31:45.960258 kubelet[4320]: I0620 19:31:45.960251 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-backend-key-pair\") pod \"whisker-6994686b5-zp82r\" (UID: \"0515d912-1882-4a3f-805a-7b9c8d6f467b\") " pod="calico-system/whisker-6994686b5-zp82r" Jun 20 19:31:45.960330 kubelet[4320]: I0620 19:31:45.960310 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56131e8f-0d6e-4ded-a253-1effc765ab02-calico-apiserver-certs\") pod \"calico-apiserver-796757f947-pwr97\" (UID: \"56131e8f-0d6e-4ded-a253-1effc765ab02\") " pod="calico-apiserver/calico-apiserver-796757f947-pwr97" Jun 20 19:31:45.960355 kubelet[4320]: I0620 19:31:45.960330 4320 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb8f50da-2df9-4446-a7af-2903440861ca-calico-apiserver-certs\") pod \"calico-apiserver-796757f947-zxknb\" (UID: \"fb8f50da-2df9-4446-a7af-2903440861ca\") " pod="calico-apiserver/calico-apiserver-796757f947-zxknb" Jun 20 19:31:45.960387 kubelet[4320]: I0620 19:31:45.960376 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-ca-bundle\") pod \"whisker-6994686b5-zp82r\" (UID: \"0515d912-1882-4a3f-805a-7b9c8d6f467b\") " pod="calico-system/whisker-6994686b5-zp82r" Jun 20 19:31:45.960562 kubelet[4320]: I0620 19:31:45.960545 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvp5\" (UniqueName: \"kubernetes.io/projected/56131e8f-0d6e-4ded-a253-1effc765ab02-kube-api-access-2bvp5\") pod \"calico-apiserver-796757f947-pwr97\" (UID: \"56131e8f-0d6e-4ded-a253-1effc765ab02\") " pod="calico-apiserver/calico-apiserver-796757f947-pwr97" Jun 20 19:31:46.072807 containerd[2800]: time="2025-06-20T19:31:46.072723555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:31:46.084289 containerd[2800]: time="2025-06-20T19:31:46.084262389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpq5r,Uid:76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549,Namespace:kube-system,Attempt:0,}" Jun 20 19:31:46.087827 containerd[2800]: time="2025-06-20T19:31:46.087803782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dw589,Uid:89829641-c7f4-45b6-a27e-6b93bdf68c55,Namespace:kube-system,Attempt:0,}" Jun 20 19:31:46.138953 containerd[2800]: time="2025-06-20T19:31:46.138909177Z" level=error msg="Failed to destroy network for sandbox \"887f95419389b07b1a0dd672c8936ab8f13f1485833b027417ad9239807a9dab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.139343 containerd[2800]: time="2025-06-20T19:31:46.139314054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dw589,Uid:89829641-c7f4-45b6-a27e-6b93bdf68c55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"887f95419389b07b1a0dd672c8936ab8f13f1485833b027417ad9239807a9dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.139511 kubelet[4320]: E0620 19:31:46.139469 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"887f95419389b07b1a0dd672c8936ab8f13f1485833b027417ad9239807a9dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.139556 kubelet[4320]: E0620 19:31:46.139542 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"887f95419389b07b1a0dd672c8936ab8f13f1485833b027417ad9239807a9dab\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dw589" Jun 20 19:31:46.139584 kubelet[4320]: E0620 19:31:46.139563 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"887f95419389b07b1a0dd672c8936ab8f13f1485833b027417ad9239807a9dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dw589" Jun 20 19:31:46.139667 kubelet[4320]: E0620 19:31:46.139644 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dw589_kube-system(89829641-c7f4-45b6-a27e-6b93bdf68c55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dw589_kube-system(89829641-c7f4-45b6-a27e-6b93bdf68c55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"887f95419389b07b1a0dd672c8936ab8f13f1485833b027417ad9239807a9dab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dw589" podUID="89829641-c7f4-45b6-a27e-6b93bdf68c55" Jun 20 19:31:46.140643 containerd[2800]: time="2025-06-20T19:31:46.140611181Z" level=error msg="Failed to destroy network for sandbox \"d5e117ea99dd0969d92cd0932e1d6fd0d395ceb5c30c8aab1bfaa11e45ddce40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.140954 containerd[2800]: time="2025-06-20T19:31:46.140930722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpq5r,Uid:76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5e117ea99dd0969d92cd0932e1d6fd0d395ceb5c30c8aab1bfaa11e45ddce40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.141082 kubelet[4320]: E0620 19:31:46.141060 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5e117ea99dd0969d92cd0932e1d6fd0d395ceb5c30c8aab1bfaa11e45ddce40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.141116 kubelet[4320]: E0620 19:31:46.141101 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5e117ea99dd0969d92cd0932e1d6fd0d395ceb5c30c8aab1bfaa11e45ddce40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cpq5r" Jun 20 19:31:46.141142 kubelet[4320]: E0620 19:31:46.141121 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d5e117ea99dd0969d92cd0932e1d6fd0d395ceb5c30c8aab1bfaa11e45ddce40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cpq5r" Jun 20 19:31:46.141183 kubelet[4320]: E0620 19:31:46.141166 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cpq5r_kube-system(76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cpq5r_kube-system(76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5e117ea99dd0969d92cd0932e1d6fd0d395ceb5c30c8aab1bfaa11e45ddce40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cpq5r" podUID="76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549" Jun 20 19:31:46.156230 containerd[2800]: time="2025-06-20T19:31:46.156208026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9z2k6,Uid:4fdaa1f1-f07b-43b7-8b92-4c1a87677155,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:46.167833 containerd[2800]: time="2025-06-20T19:31:46.167805711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-pwr97,Uid:56131e8f-0d6e-4ded-a253-1effc765ab02,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:31:46.195362 containerd[2800]: time="2025-06-20T19:31:46.195325862Z" level=error msg="Failed to destroy network for sandbox \"ab414fd393e8ce7411afbc100468846cb44f4db852cda3b321455a922d047839\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.195712 containerd[2800]: time="2025-06-20T19:31:46.195686131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9z2k6,Uid:4fdaa1f1-f07b-43b7-8b92-4c1a87677155,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab414fd393e8ce7411afbc100468846cb44f4db852cda3b321455a922d047839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.195908 kubelet[4320]: E0620 19:31:46.195872 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab414fd393e8ce7411afbc100468846cb44f4db852cda3b321455a922d047839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.195950 kubelet[4320]: E0620 19:31:46.195928 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab414fd393e8ce7411afbc100468846cb44f4db852cda3b321455a922d047839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-9z2k6" Jun 20 19:31:46.195973 kubelet[4320]: E0620 
19:31:46.195948 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab414fd393e8ce7411afbc100468846cb44f4db852cda3b321455a922d047839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-9z2k6" Jun 20 19:31:46.196016 kubelet[4320]: E0620 19:31:46.195994 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-9z2k6_calico-system(4fdaa1f1-f07b-43b7-8b92-4c1a87677155)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-9z2k6_calico-system(4fdaa1f1-f07b-43b7-8b92-4c1a87677155)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab414fd393e8ce7411afbc100468846cb44f4db852cda3b321455a922d047839\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-9z2k6" podUID="4fdaa1f1-f07b-43b7-8b92-4c1a87677155" Jun 20 19:31:46.206439 containerd[2800]: time="2025-06-20T19:31:46.206415770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f884559d4-mkbr4,Uid:87fee9d4-61a9-48c9-a930-d56092cce167,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:46.207992 containerd[2800]: time="2025-06-20T19:31:46.207956463Z" level=error msg="Failed to destroy network for sandbox \"5403711027dce950d841bb519da07090ef90f0a72ef386af80a6d42338a5bebd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.211006 containerd[2800]: time="2025-06-20T19:31:46.210976997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b9958ddd-65rvm,Uid:6959dc23-1f17-4656-bbb0-c17e5889e419,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:31:46.215151 containerd[2800]: time="2025-06-20T19:31:46.215126906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-pwr97,Uid:56131e8f-0d6e-4ded-a253-1effc765ab02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5403711027dce950d841bb519da07090ef90f0a72ef386af80a6d42338a5bebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.215296 kubelet[4320]: E0620 19:31:46.215260 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5403711027dce950d841bb519da07090ef90f0a72ef386af80a6d42338a5bebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.215340 kubelet[4320]: E0620 19:31:46.215320 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5403711027dce950d841bb519da07090ef90f0a72ef386af80a6d42338a5bebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796757f947-pwr97" Jun 20 19:31:46.215363 kubelet[4320]: E0620 19:31:46.215350 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5403711027dce950d841bb519da07090ef90f0a72ef386af80a6d42338a5bebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796757f947-pwr97" Jun 20 19:31:46.215422 kubelet[4320]: E0620 19:31:46.215403 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-796757f947-pwr97_calico-apiserver(56131e8f-0d6e-4ded-a253-1effc765ab02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-796757f947-pwr97_calico-apiserver(56131e8f-0d6e-4ded-a253-1effc765ab02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5403711027dce950d841bb519da07090ef90f0a72ef386af80a6d42338a5bebd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796757f947-pwr97" podUID="56131e8f-0d6e-4ded-a253-1effc765ab02" Jun 20 19:31:46.215465 containerd[2800]: time="2025-06-20T19:31:46.215442886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-zxknb,Uid:fb8f50da-2df9-4446-a7af-2903440861ca,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:31:46.219595 containerd[2800]: time="2025-06-20T19:31:46.219573312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6994686b5-zp82r,Uid:0515d912-1882-4a3f-805a-7b9c8d6f467b,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:46.256561 containerd[2800]: time="2025-06-20T19:31:46.256246323Z" level=error msg="Failed to destroy network for sandbox \"8a8d31692fd68716ec6c2abd8e750e948c0046b2a3a44fc12e9411e847044c35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.256857 containerd[2800]: time="2025-06-20T19:31:46.256808470Z" level=error msg="Failed to destroy network for sandbox \"ffc3925e771d1d09b3a0dd4bf3c2927f009033d30bf1efd9fd0d19b49cd04361\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.257855 containerd[2800]: time="2025-06-20T19:31:46.257818142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b9958ddd-65rvm,Uid:6959dc23-1f17-4656-bbb0-c17e5889e419,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8d31692fd68716ec6c2abd8e750e948c0046b2a3a44fc12e9411e847044c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.258041 containerd[2800]: time="2025-06-20T19:31:46.258007298Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-f884559d4-mkbr4,Uid:87fee9d4-61a9-48c9-a930-d56092cce167,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc3925e771d1d09b3a0dd4bf3c2927f009033d30bf1efd9fd0d19b49cd04361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.258107 kubelet[4320]: E0620 19:31:46.258020 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8d31692fd68716ec6c2abd8e750e948c0046b2a3a44fc12e9411e847044c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.258107 kubelet[4320]: E0620 19:31:46.258075 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8d31692fd68716ec6c2abd8e750e948c0046b2a3a44fc12e9411e847044c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b9958ddd-65rvm" Jun 20 19:31:46.258107 kubelet[4320]: E0620 19:31:46.258093 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8d31692fd68716ec6c2abd8e750e948c0046b2a3a44fc12e9411e847044c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b9958ddd-65rvm" Jun 20 19:31:46.258188 kubelet[4320]: E0620 19:31:46.258138 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b9958ddd-65rvm_calico-apiserver(6959dc23-1f17-4656-bbb0-c17e5889e419)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b9958ddd-65rvm_calico-apiserver(6959dc23-1f17-4656-bbb0-c17e5889e419)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a8d31692fd68716ec6c2abd8e750e948c0046b2a3a44fc12e9411e847044c35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b9958ddd-65rvm" podUID="6959dc23-1f17-4656-bbb0-c17e5889e419" Jun 20 19:31:46.258188 kubelet[4320]: E0620 19:31:46.258142 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc3925e771d1d09b3a0dd4bf3c2927f009033d30bf1efd9fd0d19b49cd04361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.258188 kubelet[4320]: E0620 19:31:46.258182 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc3925e771d1d09b3a0dd4bf3c2927f009033d30bf1efd9fd0d19b49cd04361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f884559d4-mkbr4" Jun 20 19:31:46.258271 kubelet[4320]: E0620 19:31:46.258194 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc3925e771d1d09b3a0dd4bf3c2927f009033d30bf1efd9fd0d19b49cd04361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f884559d4-mkbr4" Jun 20 19:31:46.258271 kubelet[4320]: E0620 19:31:46.258221 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f884559d4-mkbr4_calico-system(87fee9d4-61a9-48c9-a930-d56092cce167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f884559d4-mkbr4_calico-system(87fee9d4-61a9-48c9-a930-d56092cce167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffc3925e771d1d09b3a0dd4bf3c2927f009033d30bf1efd9fd0d19b49cd04361\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f884559d4-mkbr4" podUID="87fee9d4-61a9-48c9-a930-d56092cce167" Jun 20 19:31:46.258336 containerd[2800]: time="2025-06-20T19:31:46.258223099Z" level=error msg="Failed to destroy network for sandbox \"0605dbe711fe5e7b5d5d2727aaffba387bc26cf066991c550f578be361f77c3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.259928 containerd[2800]: time="2025-06-20T19:31:46.259888216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-zxknb,Uid:fb8f50da-2df9-4446-a7af-2903440861ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0605dbe711fe5e7b5d5d2727aaffba387bc26cf066991c550f578be361f77c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.260051 kubelet[4320]: E0620 19:31:46.260022 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0605dbe711fe5e7b5d5d2727aaffba387bc26cf066991c550f578be361f77c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.260083 kubelet[4320]: E0620 19:31:46.260064 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0605dbe711fe5e7b5d5d2727aaffba387bc26cf066991c550f578be361f77c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796757f947-zxknb" Jun 20 19:31:46.260108 kubelet[4320]: E0620 19:31:46.260083 4320 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0605dbe711fe5e7b5d5d2727aaffba387bc26cf066991c550f578be361f77c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796757f947-zxknb" Jun 20 19:31:46.260153 kubelet[4320]: E0620 19:31:46.260134 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-796757f947-zxknb_calico-apiserver(fb8f50da-2df9-4446-a7af-2903440861ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-796757f947-zxknb_calico-apiserver(fb8f50da-2df9-4446-a7af-2903440861ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0605dbe711fe5e7b5d5d2727aaffba387bc26cf066991c550f578be361f77c3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796757f947-zxknb" podUID="fb8f50da-2df9-4446-a7af-2903440861ca" Jun 20 19:31:46.260307 containerd[2800]: time="2025-06-20T19:31:46.260280970Z" level=error msg="Failed to destroy network for sandbox \"a926f9ca4d6e1941c5c3c5573e8fe74c5f6c0ca76f5b88d3a5d7d97710cc3664\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.261343 containerd[2800]: time="2025-06-20T19:31:46.261313567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6994686b5-zp82r,Uid:0515d912-1882-4a3f-805a-7b9c8d6f467b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a926f9ca4d6e1941c5c3c5573e8fe74c5f6c0ca76f5b88d3a5d7d97710cc3664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.261508 kubelet[4320]: E0620 19:31:46.261434 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a926f9ca4d6e1941c5c3c5573e8fe74c5f6c0ca76f5b88d3a5d7d97710cc3664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:46.261508 kubelet[4320]: E0620 19:31:46.261467 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a926f9ca4d6e1941c5c3c5573e8fe74c5f6c0ca76f5b88d3a5d7d97710cc3664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6994686b5-zp82r" Jun 20 19:31:46.261508 kubelet[4320]: E0620 19:31:46.261481 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a926f9ca4d6e1941c5c3c5573e8fe74c5f6c0ca76f5b88d3a5d7d97710cc3664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6994686b5-zp82r" Jun 20 19:31:46.261608 kubelet[4320]: E0620 19:31:46.261512 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6994686b5-zp82r_calico-system(0515d912-1882-4a3f-805a-7b9c8d6f467b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6994686b5-zp82r_calico-system(0515d912-1882-4a3f-805a-7b9c8d6f467b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a926f9ca4d6e1941c5c3c5573e8fe74c5f6c0ca76f5b88d3a5d7d97710cc3664\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6994686b5-zp82r" podUID="0515d912-1882-4a3f-805a-7b9c8d6f467b" Jun 20 19:31:47.038577 systemd[1]: Created slice kubepods-besteffort-pode3bdbe06_b50d_4210_852c_5930face1fa3.slice - libcontainer container kubepods-besteffort-pode3bdbe06_b50d_4210_852c_5930face1fa3.slice. Jun 20 19:31:47.040289 containerd[2800]: time="2025-06-20T19:31:47.040258459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw4bd,Uid:e3bdbe06-b50d-4210-852c-5930face1fa3,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:47.082889 containerd[2800]: time="2025-06-20T19:31:47.082831614Z" level=error msg="Failed to destroy network for sandbox \"df97c87ff0b0fcb6f3fa0fda8fd04f3669fb5969c2270d8377cc0f42b51ae4dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:47.083242 containerd[2800]: time="2025-06-20T19:31:47.083211763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw4bd,Uid:e3bdbe06-b50d-4210-852c-5930face1fa3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df97c87ff0b0fcb6f3fa0fda8fd04f3669fb5969c2270d8377cc0f42b51ae4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:47.083385 kubelet[4320]: E0620 19:31:47.083354 4320 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df97c87ff0b0fcb6f3fa0fda8fd04f3669fb5969c2270d8377cc0f42b51ae4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:31:47.083607 kubelet[4320]: E0620 19:31:47.083398 4320 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df97c87ff0b0fcb6f3fa0fda8fd04f3669fb5969c2270d8377cc0f42b51ae4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tw4bd" Jun 20 19:31:47.083607 kubelet[4320]: E0620 19:31:47.083418 4320 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df97c87ff0b0fcb6f3fa0fda8fd04f3669fb5969c2270d8377cc0f42b51ae4dc\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tw4bd" Jun 20 19:31:47.083607 kubelet[4320]: E0620 19:31:47.083462 4320 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tw4bd_calico-system(e3bdbe06-b50d-4210-852c-5930face1fa3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tw4bd_calico-system(e3bdbe06-b50d-4210-852c-5930face1fa3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df97c87ff0b0fcb6f3fa0fda8fd04f3669fb5969c2270d8377cc0f42b51ae4dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tw4bd" podUID="e3bdbe06-b50d-4210-852c-5930face1fa3" Jun 20 19:31:47.084590 systemd[1]: run-netns-cni\x2d9c4e073c\x2d3e17\x2d951d\x2d3b71\x2d6fccfb2d4f78.mount: Deactivated successfully. Jun 20 19:31:48.569419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3705216573.mount: Deactivated successfully. Jun 20 19:31:48.582637 containerd[2800]: time="2025-06-20T19:31:48.582605447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:48.582824 containerd[2800]: time="2025-06-20T19:31:48.582638093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=150542367" Jun 20 19:31:48.583262 containerd[2800]: time="2025-06-20T19:31:48.583243236Z" level=info msg="ImageCreate event name:\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:48.584549 containerd[2800]: time="2025-06-20T19:31:48.584528936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:48.585113 containerd[2800]: time="2025-06-20T19:31:48.585093913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"150542229\" in 2.512335071s" Jun 20 19:31:48.585137 containerd[2800]: time="2025-06-20T19:31:48.585121197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\"" Jun 20 19:31:48.593778 containerd[2800]: time="2025-06-20T19:31:48.593753634Z" level=info msg="CreateContainer within sandbox \"6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:31:48.598687 containerd[2800]: time="2025-06-20T19:31:48.598662114Z" level=info msg="Container 72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:48.603910 containerd[2800]: time="2025-06-20T19:31:48.603876726Z" level=info msg="CreateContainer within sandbox \"6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\"" Jun 20 19:31:48.604257 containerd[2800]: time="2025-06-20T19:31:48.604234827Z" level=info msg="StartContainer for \"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\"" Jun 20 19:31:48.605601 containerd[2800]: time="2025-06-20T19:31:48.605576697Z" level=info msg="connecting to shim 72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d" address="unix:///run/containerd/s/1e489ccd65b183ae86a762215c2d5f47eb2adf82801e187f8790adacf25fd177" protocol=ttrpc version=3 Jun 20 19:31:48.641032 systemd[1]: Started cri-containerd-72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d.scope - libcontainer container 72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d. Jun 20 19:31:48.674786 containerd[2800]: time="2025-06-20T19:31:48.674760252Z" level=info msg="StartContainer for \"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" returns successfully" Jun 20 19:31:48.801977 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:31:48.802079 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 19:31:48.976673 kubelet[4320]: I0620 19:31:48.976637 4320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vbz4\" (UniqueName: \"kubernetes.io/projected/0515d912-1882-4a3f-805a-7b9c8d6f467b-kube-api-access-2vbz4\") pod \"0515d912-1882-4a3f-805a-7b9c8d6f467b\" (UID: \"0515d912-1882-4a3f-805a-7b9c8d6f467b\") " Jun 20 19:31:48.977144 kubelet[4320]: I0620 19:31:48.976686 4320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-ca-bundle\") pod \"0515d912-1882-4a3f-805a-7b9c8d6f467b\" (UID: \"0515d912-1882-4a3f-805a-7b9c8d6f467b\") " Jun 20 19:31:48.977144 kubelet[4320]: I0620 19:31:48.976720 4320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-backend-key-pair\") pod \"0515d912-1882-4a3f-805a-7b9c8d6f467b\" (UID: \"0515d912-1882-4a3f-805a-7b9c8d6f467b\") " Jun 20 19:31:48.977144 kubelet[4320]: I0620 19:31:48.977092 4320 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0515d912-1882-4a3f-805a-7b9c8d6f467b" (UID: "0515d912-1882-4a3f-805a-7b9c8d6f467b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:31:48.978853 kubelet[4320]: I0620 19:31:48.978831 4320 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0515d912-1882-4a3f-805a-7b9c8d6f467b-kube-api-access-2vbz4" (OuterVolumeSpecName: "kube-api-access-2vbz4") pod "0515d912-1882-4a3f-805a-7b9c8d6f467b" (UID: "0515d912-1882-4a3f-805a-7b9c8d6f467b"). InnerVolumeSpecName "kube-api-access-2vbz4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:31:48.978882 kubelet[4320]: I0620 19:31:48.978867 4320 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0515d912-1882-4a3f-805a-7b9c8d6f467b" (UID: "0515d912-1882-4a3f-805a-7b9c8d6f467b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:31:49.076931 kubelet[4320]: I0620 19:31:49.076910 4320 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-backend-key-pair\") on node \"ci-4344.1.0-a-403d322406\" DevicePath \"\"" Jun 20 19:31:49.076931 kubelet[4320]: I0620 19:31:49.076930 4320 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vbz4\" (UniqueName: \"kubernetes.io/projected/0515d912-1882-4a3f-805a-7b9c8d6f467b-kube-api-access-2vbz4\") on node \"ci-4344.1.0-a-403d322406\" DevicePath \"\"" Jun 20 19:31:49.076992 kubelet[4320]: I0620 19:31:49.076940 4320 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0515d912-1882-4a3f-805a-7b9c8d6f467b-whisker-ca-bundle\") on node \"ci-4344.1.0-a-403d322406\" DevicePath \"\"" Jun 20 19:31:49.084516 systemd[1]: Removed slice kubepods-besteffort-pod0515d912_1882_4a3f_805a_7b9c8d6f467b.slice - libcontainer container kubepods-besteffort-pod0515d912_1882_4a3f_805a_7b9c8d6f467b.slice. Jun 20 19:31:49.091719 kubelet[4320]: I0620 19:31:49.091673 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zchgs" podStartSLOduration=1.210582137 podStartE2EDuration="8.091658988s" podCreationTimestamp="2025-06-20 19:31:41 +0000 UTC" firstStartedPulling="2025-06-20 19:31:41.704616364 +0000 UTC m=+15.748111638" lastFinishedPulling="2025-06-20 19:31:48.585693175 +0000 UTC m=+22.629188489" observedRunningTime="2025-06-20 19:31:49.091232479 +0000 UTC m=+23.134727793" watchObservedRunningTime="2025-06-20 19:31:49.091658988 +0000 UTC m=+23.135154302" Jun 20 19:31:49.117798 systemd[1]: Created slice kubepods-besteffort-podac83d121_daac_4667_900f_c2b6088da7da.slice - libcontainer container kubepods-besteffort-podac83d121_daac_4667_900f_c2b6088da7da.slice. 
Jun 20 19:31:49.177533 kubelet[4320]: I0620 19:31:49.177500 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tf6\" (UniqueName: \"kubernetes.io/projected/ac83d121-daac-4667-900f-c2b6088da7da-kube-api-access-67tf6\") pod \"whisker-657cf88d9d-85cfm\" (UID: \"ac83d121-daac-4667-900f-c2b6088da7da\") " pod="calico-system/whisker-657cf88d9d-85cfm" Jun 20 19:31:49.177591 kubelet[4320]: I0620 19:31:49.177551 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac83d121-daac-4667-900f-c2b6088da7da-whisker-ca-bundle\") pod \"whisker-657cf88d9d-85cfm\" (UID: \"ac83d121-daac-4667-900f-c2b6088da7da\") " pod="calico-system/whisker-657cf88d9d-85cfm" Jun 20 19:31:49.177749 kubelet[4320]: I0620 19:31:49.177710 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac83d121-daac-4667-900f-c2b6088da7da-whisker-backend-key-pair\") pod \"whisker-657cf88d9d-85cfm\" (UID: \"ac83d121-daac-4667-900f-c2b6088da7da\") " pod="calico-system/whisker-657cf88d9d-85cfm" Jun 20 19:31:49.420789 containerd[2800]: time="2025-06-20T19:31:49.420703758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-657cf88d9d-85cfm,Uid:ac83d121-daac-4667-900f-c2b6088da7da,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:49.526711 systemd-networkd[2708]: calicaa5e84135c: Link UP Jun 20 19:31:49.526949 systemd-networkd[2708]: calicaa5e84135c: Gained carrier Jun 20 19:31:49.550165 containerd[2800]: 2025-06-20 19:31:49.439 [INFO][6128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:31:49.550165 containerd[2800]: 2025-06-20 19:31:49.454 [INFO][6128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0 whisker-657cf88d9d- calico-system ac83d121-daac-4667-900f-c2b6088da7da 903 0 2025-06-20 19:31:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:657cf88d9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 whisker-657cf88d9d-85cfm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicaa5e84135c [] [] }} ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-" Jun 20 19:31:49.550165 containerd[2800]: 2025-06-20 19:31:49.454 [INFO][6128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" Jun 20 19:31:49.550165 containerd[2800]: 2025-06-20 19:31:49.491 [INFO][6154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" HandleID="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Workload="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.491 [INFO][6154] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" HandleID="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Workload="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-403d322406", "pod":"whisker-657cf88d9d-85cfm", "timestamp":"2025-06-20 19:31:49.491035583 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.491 [INFO][6154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.491 [INFO][6154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.491 [INFO][6154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.500 [INFO][6154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.504 [INFO][6154] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.507 [INFO][6154] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.509 [INFO][6154] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550298 containerd[2800]: 2025-06-20 19:31:49.511 [INFO][6154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550470 containerd[2800]: 2025-06-20 19:31:49.511 [INFO][6154] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550470 containerd[2800]: 2025-06-20 19:31:49.512 [INFO][6154] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2 Jun 20 19:31:49.550470 containerd[2800]: 2025-06-20 19:31:49.515 [INFO][6154] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550470 containerd[2800]: 2025-06-20 19:31:49.518 [INFO][6154] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.129/26] block=192.168.80.128/26 handle="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550470 containerd[2800]: 2025-06-20 19:31:49.518 [INFO][6154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.129/26] handle="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:49.550470 containerd[2800]: 2025-06-20 19:31:49.518 
[INFO][6154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:31:49.550470 containerd[2800]: 2025-06-20 19:31:49.518 [INFO][6154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.129/26] IPv6=[] ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" HandleID="k8s-pod-network.6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Workload="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" Jun 20 19:31:49.550593 containerd[2800]: 2025-06-20 19:31:49.521 [INFO][6128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0", GenerateName:"whisker-657cf88d9d-", Namespace:"calico-system", SelfLink:"", UID:"ac83d121-daac-4667-900f-c2b6088da7da", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"657cf88d9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"whisker-657cf88d9d-85cfm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicaa5e84135c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:49.550593 containerd[2800]: 2025-06-20 19:31:49.521 [INFO][6128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.129/32] ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" Jun 20 19:31:49.550658 containerd[2800]: 2025-06-20 19:31:49.521 [INFO][6128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaa5e84135c ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" Jun 20 19:31:49.550658 containerd[2800]: 2025-06-20 19:31:49.527 [INFO][6128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" Jun 20 19:31:49.550696 containerd[2800]: 2025-06-20 19:31:49.528 [INFO][6128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0", GenerateName:"whisker-657cf88d9d-", Namespace:"calico-system", SelfLink:"", UID:"ac83d121-daac-4667-900f-c2b6088da7da", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"657cf88d9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2", Pod:"whisker-657cf88d9d-85cfm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicaa5e84135c", MAC:"42:c3:bf:7a:46:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:49.550742 containerd[2800]: 2025-06-20 19:31:49.548 [INFO][6128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" Namespace="calico-system" Pod="whisker-657cf88d9d-85cfm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-whisker--657cf88d9d--85cfm-eth0" Jun 20 19:31:49.561879 containerd[2800]: time="2025-06-20T19:31:49.561839284Z" level=info msg="connecting to shim 6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2" address="unix:///run/containerd/s/1b6bb0411dd7cffd7b8e3c17e3fff7898c922bebdf7f31eda2bfedc143b98de4" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:49.573871 systemd[1]: var-lib-kubelet-pods-0515d912\x2d1882\x2d4a3f\x2d805a\x2d7b9c8d6f467b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2vbz4.mount: Deactivated successfully. Jun 20 19:31:49.573952 systemd[1]: var-lib-kubelet-pods-0515d912\x2d1882\x2d4a3f\x2d805a\x2d7b9c8d6f467b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 19:31:49.593030 systemd[1]: Started cri-containerd-6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2.scope - libcontainer container 6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2. 
Jun 20 19:31:49.632607 containerd[2800]: time="2025-06-20T19:31:49.632575775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-657cf88d9d-85cfm,Uid:ac83d121-daac-4667-900f-c2b6088da7da,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2\"" Jun 20 19:31:49.633699 containerd[2800]: time="2025-06-20T19:31:49.633679674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:31:50.035537 kubelet[4320]: I0620 19:31:50.035504 4320 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0515d912-1882-4a3f-805a-7b9c8d6f467b" path="/var/lib/kubelet/pods/0515d912-1882-4a3f-805a-7b9c8d6f467b/volumes" Jun 20 19:31:50.081988 kubelet[4320]: I0620 19:31:50.081966 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:31:50.139733 containerd[2800]: time="2025-06-20T19:31:50.139686665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4605623" Jun 20 19:31:50.139733 containerd[2800]: time="2025-06-20T19:31:50.139707229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:50.140454 containerd[2800]: time="2025-06-20T19:31:50.140431460Z" level=info msg="ImageCreate event name:\"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:50.142198 containerd[2800]: time="2025-06-20T19:31:50.142178850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:50.142872 containerd[2800]: time="2025-06-20T19:31:50.142857515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"5974856\" in 509.145756ms" Jun 20 19:31:50.142902 containerd[2800]: time="2025-06-20T19:31:50.142878118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\"" Jun 20 19:31:50.144851 containerd[2800]: time="2025-06-20T19:31:50.144823498Z" level=info msg="CreateContainer within sandbox \"6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:31:50.148290 containerd[2800]: time="2025-06-20T19:31:50.148253548Z" level=info msg="Container 9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:50.151667 containerd[2800]: time="2025-06-20T19:31:50.151636390Z" level=info msg="CreateContainer within sandbox \"6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c\"" Jun 20 19:31:50.151977 containerd[2800]: time="2025-06-20T19:31:50.151942637Z" level=info msg="StartContainer for \"9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c\"" Jun 20 19:31:50.152946 containerd[2800]: 
time="2025-06-20T19:31:50.152926389Z" level=info msg="connecting to shim 9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c" address="unix:///run/containerd/s/1b6bb0411dd7cffd7b8e3c17e3fff7898c922bebdf7f31eda2bfedc143b98de4" protocol=ttrpc version=3 Jun 20 19:31:50.182034 systemd[1]: Started cri-containerd-9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c.scope - libcontainer container 9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c. Jun 20 19:31:50.211507 containerd[2800]: time="2025-06-20T19:31:50.211473626Z" level=info msg="StartContainer for \"9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c\" returns successfully" Jun 20 19:31:50.212935 containerd[2800]: time="2025-06-20T19:31:50.212911167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:31:50.825962 systemd-networkd[2708]: calicaa5e84135c: Gained IPv6LL Jun 20 19:31:50.956215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285267758.mount: Deactivated successfully. Jun 20 19:31:50.959476 containerd[2800]: time="2025-06-20T19:31:50.959442432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:50.959750 containerd[2800]: time="2025-06-20T19:31:50.959483118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=30829716" Jun 20 19:31:50.960193 containerd[2800]: time="2025-06-20T19:31:50.960176865Z" level=info msg="ImageCreate event name:\"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:50.961829 containerd[2800]: time="2025-06-20T19:31:50.961805437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:50.962463 containerd[2800]: time="2025-06-20T19:31:50.962448776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"30829546\" in 749.508204ms" Jun 20 19:31:50.962491 containerd[2800]: time="2025-06-20T19:31:50.962470179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\"" Jun 20 19:31:50.964533 containerd[2800]: time="2025-06-20T19:31:50.964510934Z" level=info msg="CreateContainer within sandbox \"6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:31:50.968172 containerd[2800]: time="2025-06-20T19:31:50.968146735Z" level=info msg="Container 71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:50.971766 containerd[2800]: time="2025-06-20T19:31:50.971742090Z" level=info msg="CreateContainer within sandbox \"6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5\"" Jun 20 19:31:50.972123 containerd[2800]: time="2025-06-20T19:31:50.972100946Z" level=info msg="StartContainer for \"71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5\"" Jun 20 19:31:50.973098 containerd[2800]: time="2025-06-20T19:31:50.973078057Z" level=info msg="connecting to shim 71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5" address="unix:///run/containerd/s/1b6bb0411dd7cffd7b8e3c17e3fff7898c922bebdf7f31eda2bfedc143b98de4" protocol=ttrpc version=3 Jun 20 19:31:51.007042 systemd[1]: Started cri-containerd-71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5.scope - libcontainer container 71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5. Jun 20 19:31:51.036444 containerd[2800]: time="2025-06-20T19:31:51.036417603Z" level=info msg="StartContainer for \"71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5\" returns successfully" Jun 20 19:31:51.093064 kubelet[4320]: I0620 19:31:51.092976 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-657cf88d9d-85cfm" podStartSLOduration=0.763422919 podStartE2EDuration="2.092962901s" podCreationTimestamp="2025-06-20 19:31:49 +0000 UTC" firstStartedPulling="2025-06-20 19:31:49.633479922 +0000 UTC m=+23.676975196" lastFinishedPulling="2025-06-20 19:31:50.963019904 +0000 UTC m=+25.006515178" observedRunningTime="2025-06-20 19:31:51.092741029 +0000 UTC m=+25.136236343" watchObservedRunningTime="2025-06-20 19:31:51.092962901 +0000 UTC m=+25.136458215" Jun 20 19:31:58.034228 containerd[2800]: time="2025-06-20T19:31:58.034173611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpq5r,Uid:76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549,Namespace:kube-system,Attempt:0,}" Jun 20 19:31:58.034712 containerd[2800]: time="2025-06-20T19:31:58.034338908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b9958ddd-65rvm,Uid:6959dc23-1f17-4656-bbb0-c17e5889e419,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:31:58.115532 systemd-networkd[2708]: caliebcd7ed2235: Link UP Jun 20 19:31:58.115793 systemd-networkd[2708]: caliebcd7ed2235: Gained carrier Jun 20 19:31:58.123929 containerd[2800]: 2025-06-20 19:31:58.051 [INFO][6962] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:31:58.123929 containerd[2800]: 2025-06-20 19:31:58.067 [INFO][6962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0 coredns-674b8bbfcf- kube-system 76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549 823 0 2025-06-20 19:31:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 coredns-674b8bbfcf-cpq5r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebcd7ed2235 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-" Jun 20 19:31:58.123929 containerd[2800]: 2025-06-20 19:31:58.067 [INFO][6962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" Jun 20 19:31:58.123929 containerd[2800]: 2025-06-20 19:31:58.087 [INFO][7015] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" HandleID="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Workload="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.087 [INFO][7015] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" HandleID="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Workload="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-403d322406", "pod":"coredns-674b8bbfcf-cpq5r", "timestamp":"2025-06-20 19:31:58.087557997 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.087 [INFO][7015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.087 [INFO][7015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.087 [INFO][7015] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.096 [INFO][7015] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.099 [INFO][7015] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.103 [INFO][7015] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.104 [INFO][7015] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124076 containerd[2800]: 2025-06-20 19:31:58.105 [INFO][7015] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124262 containerd[2800]: 2025-06-20 19:31:58.105 [INFO][7015] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124262 containerd[2800]: 2025-06-20 19:31:58.106 [INFO][7015] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa Jun 20 19:31:58.124262 containerd[2800]: 2025-06-20 19:31:58.109 [INFO][7015] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" 
host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124262 containerd[2800]: 2025-06-20 19:31:58.112 [INFO][7015] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.130/26] block=192.168.80.128/26 handle="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124262 containerd[2800]: 2025-06-20 19:31:58.112 [INFO][7015] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.130/26] handle="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.124262 containerd[2800]: 2025-06-20 19:31:58.112 [INFO][7015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:31:58.124262 containerd[2800]: 2025-06-20 19:31:58.112 [INFO][7015] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.130/26] IPv6=[] ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" HandleID="k8s-pod-network.f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Workload="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" Jun 20 19:31:58.124460 containerd[2800]: 2025-06-20 19:31:58.114 [INFO][6962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"coredns-674b8bbfcf-cpq5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebcd7ed2235", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:58.124460 containerd[2800]: 2025-06-20 19:31:58.114 [INFO][6962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.130/32] ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" 
WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" Jun 20 19:31:58.124460 containerd[2800]: 2025-06-20 19:31:58.114 [INFO][6962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebcd7ed2235 ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" Jun 20 19:31:58.124460 containerd[2800]: 2025-06-20 19:31:58.115 [INFO][6962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" Jun 20 19:31:58.124460 containerd[2800]: 2025-06-20 19:31:58.116 [INFO][6962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa", Pod:"coredns-674b8bbfcf-cpq5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebcd7ed2235", MAC:"5e:0f:1e:30:08:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:58.124460 containerd[2800]: 2025-06-20 19:31:58.122 [INFO][6962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpq5r" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--cpq5r-eth0" Jun 20 19:31:58.134077 containerd[2800]: time="2025-06-20T19:31:58.134049016Z" level=info msg="connecting to shim 
f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa" address="unix:///run/containerd/s/143deba86fbfa8e3883a37cb3d591256ccc561520c6f69ffe4004b13df021af9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:58.169964 systemd[1]: Started cri-containerd-f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa.scope - libcontainer container f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa. Jun 20 19:31:58.200935 containerd[2800]: time="2025-06-20T19:31:58.200908502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpq5r,Uid:76b3a8a1-ea32-4d26-95c5-1d8d2b0dc549,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa\"" Jun 20 19:31:58.203368 containerd[2800]: time="2025-06-20T19:31:58.203341959Z" level=info msg="CreateContainer within sandbox \"f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:31:58.207697 containerd[2800]: time="2025-06-20T19:31:58.207667494Z" level=info msg="Container 84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:58.210189 containerd[2800]: time="2025-06-20T19:31:58.210162157Z" level=info msg="CreateContainer within sandbox \"f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15\"" Jun 20 19:31:58.210491 containerd[2800]: time="2025-06-20T19:31:58.210472030Z" level=info msg="StartContainer for \"84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15\"" Jun 20 19:31:58.211210 containerd[2800]: time="2025-06-20T19:31:58.211189626Z" level=info msg="connecting to shim 84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15" address="unix:///run/containerd/s/143deba86fbfa8e3883a37cb3d591256ccc561520c6f69ffe4004b13df021af9" protocol=ttrpc version=3 Jun 20 19:31:58.216913 systemd-networkd[2708]: cali8a1e972e7f1: Link UP Jun 20 19:31:58.217141 systemd-networkd[2708]: cali8a1e972e7f1: Gained carrier Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.052 [INFO][6968] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.067 [INFO][6968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0 calico-apiserver-9b9958ddd- calico-apiserver 6959dc23-1f17-4656-bbb0-c17e5889e419 833 0 2025-06-20 19:31:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b9958ddd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 calico-apiserver-9b9958ddd-65rvm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a1e972e7f1 [] [] }} ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.068 [INFO][6968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.088 [INFO][7017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" HandleID="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.088 [INFO][7017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" HandleID="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005227b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-403d322406", "pod":"calico-apiserver-9b9958ddd-65rvm", "timestamp":"2025-06-20 19:31:58.088046048 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.088 [INFO][7017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.112 [INFO][7017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.112 [INFO][7017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.196 [INFO][7017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.200 [INFO][7017] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.204 [INFO][7017] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.205 [INFO][7017] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.206 [INFO][7017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.207 [INFO][7017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.208 [INFO][7017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.210 [INFO][7017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.213 [INFO][7017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.131/26] block=192.168.80.128/26 handle="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.213 [INFO][7017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.131/26] handle="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.213 [INFO][7017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:31:58.224390 containerd[2800]: 2025-06-20 19:31:58.213 [INFO][7017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.131/26] IPv6=[] ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" HandleID="k8s-pod-network.7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" Jun 20 19:31:58.224817 containerd[2800]: 2025-06-20 19:31:58.215 [INFO][6968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0", GenerateName:"calico-apiserver-9b9958ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6959dc23-1f17-4656-bbb0-c17e5889e419", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b9958ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"calico-apiserver-9b9958ddd-65rvm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a1e972e7f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:58.224817 containerd[2800]: 2025-06-20 19:31:58.215 [INFO][6968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.131/32] ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" Jun 20 19:31:58.224817 containerd[2800]: 2025-06-20 19:31:58.215 [INFO][6968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a1e972e7f1 ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" Jun 20 19:31:58.224817 containerd[2800]: 2025-06-20 19:31:58.217 [INFO][6968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" Jun 20 19:31:58.224817 containerd[2800]: 2025-06-20 19:31:58.217 [INFO][6968] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0", GenerateName:"calico-apiserver-9b9958ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6959dc23-1f17-4656-bbb0-c17e5889e419", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b9958ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec", Pod:"calico-apiserver-9b9958ddd-65rvm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a1e972e7f1", MAC:"66:c2:fd:e0:9b:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:58.224817 containerd[2800]: 2025-06-20 19:31:58.222 [INFO][6968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-65rvm" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--65rvm-eth0" Jun 20 19:31:58.233993 containerd[2800]: time="2025-06-20T19:31:58.233948224Z" level=info msg="connecting to shim 7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec" address="unix:///run/containerd/s/2dbecc6ff1fae9370c18586e899d9ba78d68b34deb931cdcaf9c97eac0eeeec3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:58.237989 systemd[1]: Started cri-containerd-84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15.scope - libcontainer container 84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15. Jun 20 19:31:58.246894 systemd[1]: Started cri-containerd-7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec.scope - libcontainer container 7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec. 
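Each Calico allocation in the journal above ends with an "ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses" entry carrying the claimed IPv4 address, the sandbox ContainerID and the WorkloadEndpoint name. A minimal sketch for pulling those assignments out of a journal dump (plain Python 3 standard library; the field layout is assumed to match the lines above and may differ in other Calico releases):

    import re
    import sys

    # Shape of the "assigned addresses" entries seen in this journal
    # (assumed from the lines above; other Calico versions may format differently).
    PATTERN = re.compile(
        r'Calico CNI IPAM assigned addresses IPv4=\[(?P<ip>[^\]]*)\]'
        r'.*?ContainerID="(?P<cid>[0-9a-f]+)"'
        r'.*?Workload="(?P<workload>[^"]+)"'
    )

    def assignments(lines):
        """Yield (workload, short container id, ip) for each allocation entry."""
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield m.group("workload"), m.group("cid")[:12], m.group("ip")

    if __name__ == "__main__":
        for workload, cid, ip in assignments(sys.stdin):
            print(f"{ip:20} {cid}  {workload}")

Fed the entries above (one journal message per line), this reports 192.168.80.130/26 for the coredns-674b8bbfcf-cpq5r endpoint and 192.168.80.131/26 for calico-apiserver-9b9958ddd-65rvm.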
Jun 20 19:31:58.261242 containerd[2800]: time="2025-06-20T19:31:58.261211137Z" level=info msg="StartContainer for \"84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15\" returns successfully" Jun 20 19:31:58.272279 containerd[2800]: time="2025-06-20T19:31:58.272254741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b9958ddd-65rvm,Uid:6959dc23-1f17-4656-bbb0-c17e5889e419,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec\"" Jun 20 19:31:58.273401 containerd[2800]: time="2025-06-20T19:31:58.273381860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:31:59.034316 containerd[2800]: time="2025-06-20T19:31:59.034272055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f884559d4-mkbr4,Uid:87fee9d4-61a9-48c9-a930-d56092cce167,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:59.034719 containerd[2800]: time="2025-06-20T19:31:59.034273135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw4bd,Uid:e3bdbe06-b50d-4210-852c-5930face1fa3,Namespace:calico-system,Attempt:0,}" Jun 20 19:31:59.041017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643643719.mount: Deactivated successfully. Jun 20 19:31:59.116931 kubelet[4320]: I0620 19:31:59.116880 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cpq5r" podStartSLOduration=28.116863065 podStartE2EDuration="28.116863065s" podCreationTimestamp="2025-06-20 19:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:31:59.11651775 +0000 UTC m=+33.160013064" watchObservedRunningTime="2025-06-20 19:31:59.116863065 +0000 UTC m=+33.160358379" Jun 20 19:31:59.123502 systemd-networkd[2708]: calid3e2b4c6d35: Link UP Jun 20 19:31:59.123898 systemd-networkd[2708]: calid3e2b4c6d35: Gained carrier Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.052 [INFO][7273] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.063 [INFO][7273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0 calico-kube-controllers-f884559d4- calico-system 87fee9d4-61a9-48c9-a930-d56092cce167 831 0 2025-06-20 19:31:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f884559d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 calico-kube-controllers-f884559d4-mkbr4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3e2b4c6d35 [] [] }} ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.063 [INFO][7273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" 
WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.084 [INFO][7317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" HandleID="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Workload="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.084 [INFO][7317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" HandleID="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Workload="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400045c170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-403d322406", "pod":"calico-kube-controllers-f884559d4-mkbr4", "timestamp":"2025-06-20 19:31:59.08399035 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.084 [INFO][7317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.084 [INFO][7317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.084 [INFO][7317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.092 [INFO][7317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.095 [INFO][7317] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.098 [INFO][7317] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.100 [INFO][7317] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.101 [INFO][7317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.101 [INFO][7317] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.102 [INFO][7317] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.116 [INFO][7317] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" 
host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.120 [INFO][7317] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.132/26] block=192.168.80.128/26 handle="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.120 [INFO][7317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.132/26] handle="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.120 [INFO][7317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:31:59.130396 containerd[2800]: 2025-06-20 19:31:59.120 [INFO][7317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.132/26] IPv6=[] ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" HandleID="k8s-pod-network.788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Workload="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" Jun 20 19:31:59.130895 containerd[2800]: 2025-06-20 19:31:59.122 [INFO][7273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0", GenerateName:"calico-kube-controllers-f884559d4-", Namespace:"calico-system", SelfLink:"", UID:"87fee9d4-61a9-48c9-a930-d56092cce167", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f884559d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"calico-kube-controllers-f884559d4-mkbr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e2b4c6d35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:59.130895 containerd[2800]: 2025-06-20 19:31:59.122 [INFO][7273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.132/32] ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" Jun 20 19:31:59.130895 containerd[2800]: 2025-06-20 19:31:59.122 [INFO][7273] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3e2b4c6d35 ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" Jun 20 19:31:59.130895 containerd[2800]: 2025-06-20 19:31:59.123 [INFO][7273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" Jun 20 19:31:59.130895 containerd[2800]: 2025-06-20 19:31:59.124 [INFO][7273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0", GenerateName:"calico-kube-controllers-f884559d4-", Namespace:"calico-system", SelfLink:"", UID:"87fee9d4-61a9-48c9-a930-d56092cce167", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f884559d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe", Pod:"calico-kube-controllers-f884559d4-mkbr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e2b4c6d35", MAC:"f6:5b:4f:63:23:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:59.130895 containerd[2800]: 2025-06-20 19:31:59.129 [INFO][7273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" Namespace="calico-system" Pod="calico-kube-controllers-f884559d4-mkbr4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--kube--controllers--f884559d4--mkbr4-eth0" Jun 20 19:31:59.141783 containerd[2800]: time="2025-06-20T19:31:59.141739134Z" level=info msg="connecting to shim 788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe" address="unix:///run/containerd/s/1ec053687a1cce6228ae30e5454a136493b3736fe3a15cddffa2fefd6c9dac76" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:59.172971 systemd[1]: Started 
cri-containerd-788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe.scope - libcontainer container 788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe. Jun 20 19:31:59.199613 containerd[2800]: time="2025-06-20T19:31:59.199584608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f884559d4-mkbr4,Uid:87fee9d4-61a9-48c9-a930-d56092cce167,Namespace:calico-system,Attempt:0,} returns sandbox id \"788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe\"" Jun 20 19:31:59.210304 containerd[2800]: time="2025-06-20T19:31:59.210276286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:59.210359 containerd[2800]: time="2025-06-20T19:31:59.210318251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=44514850" Jun 20 19:31:59.211028 containerd[2800]: time="2025-06-20T19:31:59.211006640Z" level=info msg="ImageCreate event name:\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:59.212923 containerd[2800]: time="2025-06-20T19:31:59.212899951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:31:59.213432 containerd[2800]: time="2025-06-20T19:31:59.213410442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 940.00018ms" Jun 20 19:31:59.213456 containerd[2800]: time="2025-06-20T19:31:59.213439245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 19:31:59.214095 systemd-networkd[2708]: cali0f22e6dac09: Link UP Jun 20 19:31:59.214272 systemd-networkd[2708]: cali0f22e6dac09: Gained carrier Jun 20 19:31:59.214309 containerd[2800]: time="2025-06-20T19:31:59.214273410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:31:59.215539 containerd[2800]: time="2025-06-20T19:31:59.215517815Z" level=info msg="CreateContainer within sandbox \"7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:31:59.219233 containerd[2800]: time="2025-06-20T19:31:59.219201747Z" level=info msg="Container 23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.052 [INFO][7275] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.067 [INFO][7275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0 csi-node-driver- calico-system e3bdbe06-b50d-4210-852c-5930face1fa3 696 0 2025-06-20 19:31:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 csi-node-driver-tw4bd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f22e6dac09 [] [] }} ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.067 [INFO][7275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.086 [INFO][7323] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" HandleID="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Workload="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.086 [INFO][7323] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" HandleID="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Workload="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e15f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-403d322406", "pod":"csi-node-driver-tw4bd", "timestamp":"2025-06-20 19:31:59.086536966 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.086 [INFO][7323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.120 [INFO][7323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.120 [INFO][7323] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.193 [INFO][7323] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.197 [INFO][7323] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.201 [INFO][7323] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.202 [INFO][7323] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.204 [INFO][7323] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.204 [INFO][7323] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.205 [INFO][7323] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876 Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.207 [INFO][7323] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.211 [INFO][7323] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.133/26] block=192.168.80.128/26 handle="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.211 [INFO][7323] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.133/26] handle="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" host="ci-4344.1.0-a-403d322406" Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.211 [INFO][7323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:31:59.221679 containerd[2800]: 2025-06-20 19:31:59.211 [INFO][7323] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.133/26] IPv6=[] ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" HandleID="k8s-pod-network.2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Workload="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" Jun 20 19:31:59.222167 containerd[2800]: 2025-06-20 19:31:59.212 [INFO][7275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e3bdbe06-b50d-4210-852c-5930face1fa3", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"csi-node-driver-tw4bd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f22e6dac09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:59.222167 containerd[2800]: 2025-06-20 19:31:59.212 [INFO][7275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.133/32] ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" Jun 20 19:31:59.222167 containerd[2800]: 2025-06-20 19:31:59.212 [INFO][7275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f22e6dac09 ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" Jun 20 19:31:59.222167 containerd[2800]: 2025-06-20 19:31:59.214 [INFO][7275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" Jun 20 19:31:59.222167 containerd[2800]: 2025-06-20 19:31:59.214 [INFO][7275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e3bdbe06-b50d-4210-852c-5930face1fa3", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876", Pod:"csi-node-driver-tw4bd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f22e6dac09", MAC:"4a:ef:50:86:f5:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:31:59.222167 containerd[2800]: 2025-06-20 19:31:59.220 [INFO][7275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" Namespace="calico-system" Pod="csi-node-driver-tw4bd" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-csi--node--driver--tw4bd-eth0" Jun 20 19:31:59.222463 containerd[2800]: time="2025-06-20T19:31:59.222434673Z" level=info msg="CreateContainer within sandbox \"7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4\"" Jun 20 19:31:59.222774 containerd[2800]: time="2025-06-20T19:31:59.222758025Z" level=info msg="StartContainer for \"23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4\"" Jun 20 19:31:59.223741 containerd[2800]: time="2025-06-20T19:31:59.223720562Z" level=info msg="connecting to shim 23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4" address="unix:///run/containerd/s/2dbecc6ff1fae9370c18586e899d9ba78d68b34deb931cdcaf9c97eac0eeeec3" protocol=ttrpc version=3 Jun 20 19:31:59.231261 containerd[2800]: time="2025-06-20T19:31:59.231227279Z" level=info msg="connecting to shim 2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876" address="unix:///run/containerd/s/fe77240b82dd20c368534cdaffaf83a788be9ebaf4d2279596162f590cafa59b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:31:59.254038 systemd[1]: Started cri-containerd-23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4.scope - libcontainer container 23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4. 
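Every pod in this journal is allocated from the same host-affine block, 192.168.80.128/26, and the addresses are handed out sequentially: .130 (coredns-674b8bbfcf-cpq5r), .131 (calico-apiserver-9b9958ddd-65rvm), .132 (calico-kube-controllers-f884559d4-mkbr4), .133 (csi-node-driver-tw4bd). A small worked check of that block arithmetic with Python's standard ipaddress module:

    import ipaddress

    # The host-affine IPAM block referenced throughout the entries above.
    block = ipaddress.ip_network("192.168.80.128/26")

    print(block.num_addresses)       # 64 -> a /26 spans 64 addresses
    print(block.network_address)     # 192.168.80.128
    print(block.broadcast_address)   # 192.168.80.191

    # Addresses claimed so far in this journal all sit inside that block.
    claimed = ["192.168.80.130", "192.168.80.131",
               "192.168.80.132", "192.168.80.133"]
    assert all(ipaddress.ip_address(ip) in block for ip in claimed)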
Jun 20 19:31:59.254756 kubelet[4320]: I0620 19:31:59.254730 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:31:59.256485 systemd[1]: Started cri-containerd-2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876.scope - libcontainer container 2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876. Jun 20 19:31:59.275232 containerd[2800]: time="2025-06-20T19:31:59.275203235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw4bd,Uid:e3bdbe06-b50d-4210-852c-5930face1fa3,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876\"" Jun 20 19:31:59.291302 containerd[2800]: time="2025-06-20T19:31:59.291239572Z" level=info msg="StartContainer for \"23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4\" returns successfully" Jun 20 19:31:59.338936 systemd-networkd[2708]: cali8a1e972e7f1: Gained IPv6LL Jun 20 19:31:59.580079 systemd-networkd[2708]: vxlan.calico: Link UP Jun 20 19:31:59.580084 systemd-networkd[2708]: vxlan.calico: Gained carrier Jun 20 19:31:59.913939 systemd-networkd[2708]: caliebcd7ed2235: Gained IPv6LL Jun 20 19:32:00.034355 containerd[2800]: time="2025-06-20T19:32:00.034096794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-pwr97,Uid:56131e8f-0d6e-4ded-a253-1effc765ab02,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:32:00.034355 containerd[2800]: time="2025-06-20T19:32:00.034097354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dw589,Uid:89829641-c7f4-45b6-a27e-6b93bdf68c55,Namespace:kube-system,Attempt:0,}" Jun 20 19:32:00.111360 kubelet[4320]: I0620 19:32:00.111307 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b9958ddd-65rvm" podStartSLOduration=20.17030137 podStartE2EDuration="21.111291691s" podCreationTimestamp="2025-06-20 19:31:39 +0000 UTC" firstStartedPulling="2025-06-20 19:31:58.273178078 +0000 UTC m=+32.316673392" lastFinishedPulling="2025-06-20 19:31:59.214168439 +0000 UTC m=+33.257663713" observedRunningTime="2025-06-20 19:32:00.110831367 +0000 UTC m=+34.154326681" watchObservedRunningTime="2025-06-20 19:32:00.111291691 +0000 UTC m=+34.154786965" Jun 20 19:32:00.118123 systemd-networkd[2708]: cali550aae95c58: Link UP Jun 20 19:32:00.119059 systemd-networkd[2708]: cali550aae95c58: Gained carrier Jun 20 19:32:00.126269 containerd[2800]: time="2025-06-20T19:32:00.126225814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:00.126566 containerd[2800]: time="2025-06-20T19:32:00.126535724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=48129475" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.067 [INFO][7875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0 coredns-674b8bbfcf- kube-system 89829641-c7f4-45b6-a27e-6b93bdf68c55 827 0 2025-06-20 19:31:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 coredns-674b8bbfcf-dw589 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali550aae95c58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.067 [INFO][7875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.087 [INFO][7918] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" HandleID="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Workload="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.087 [INFO][7918] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" HandleID="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Workload="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043e640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-403d322406", "pod":"coredns-674b8bbfcf-dw589", "timestamp":"2025-06-20 19:32:00.087709973 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.087 [INFO][7918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.087 [INFO][7918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.087 [INFO][7918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.096 [INFO][7918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.100 [INFO][7918] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.104 [INFO][7918] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.106 [INFO][7918] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.107 [INFO][7918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.107 [INFO][7918] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.109 [INFO][7918] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903 Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.111 [INFO][7918] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.115 [INFO][7918] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.134/26] block=192.168.80.128/26 handle="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.115 [INFO][7918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.134/26] handle="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.115 [INFO][7918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:32:00.127955 containerd[2800]: 2025-06-20 19:32:00.115 [INFO][7918] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.134/26] IPv6=[] ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" HandleID="k8s-pod-network.2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Workload="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" Jun 20 19:32:00.128404 containerd[2800]: 2025-06-20 19:32:00.116 [INFO][7875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89829641-c7f4-45b6-a27e-6b93bdf68c55", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"coredns-674b8bbfcf-dw589", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali550aae95c58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:00.128404 containerd[2800]: 2025-06-20 19:32:00.116 [INFO][7875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.134/32] ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" Jun 20 19:32:00.128404 containerd[2800]: 2025-06-20 19:32:00.117 [INFO][7875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali550aae95c58 ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" Jun 20 19:32:00.128404 containerd[2800]: 2025-06-20 19:32:00.118 [INFO][7875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" Jun 20 19:32:00.128404 containerd[2800]: 2025-06-20 19:32:00.118 [INFO][7875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89829641-c7f4-45b6-a27e-6b93bdf68c55", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903", Pod:"coredns-674b8bbfcf-dw589", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali550aae95c58", MAC:"d6:e6:2a:8c:3a:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:00.128404 containerd[2800]: 2025-06-20 19:32:00.126 [INFO][7875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" Namespace="kube-system" Pod="coredns-674b8bbfcf-dw589" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-coredns--674b8bbfcf--dw589-eth0" Jun 20 19:32:00.128404 containerd[2800]: time="2025-06-20T19:32:00.128231448Z" level=info msg="ImageCreate event name:\"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:00.130195 containerd[2800]: time="2025-06-20T19:32:00.130168595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:00.130684 containerd[2800]: time="2025-06-20T19:32:00.130658603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"49498684\" in 916.35547ms" Jun 20 19:32:00.130709 containerd[2800]: time="2025-06-20T19:32:00.130690366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\"" Jun 20 19:32:00.131491 containerd[2800]: time="2025-06-20T19:32:00.131477722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:32:00.136493 containerd[2800]: time="2025-06-20T19:32:00.136474204Z" level=info msg="CreateContainer within sandbox \"788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:32:00.138500 containerd[2800]: time="2025-06-20T19:32:00.138471877Z" level=info msg="connecting to shim 2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903" address="unix:///run/containerd/s/c146153428b939e18acb0d2aa00791fbcfefa89a69636b9d118a1b207e8fe3e0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:00.140912 containerd[2800]: time="2025-06-20T19:32:00.140882510Z" level=info msg="Container 4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:00.144268 containerd[2800]: time="2025-06-20T19:32:00.144242875Z" level=info msg="CreateContainer within sandbox \"788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\"" Jun 20 19:32:00.144613 containerd[2800]: time="2025-06-20T19:32:00.144594069Z" level=info msg="StartContainer for \"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\"" Jun 20 19:32:00.145601 containerd[2800]: time="2025-06-20T19:32:00.145578564Z" level=info msg="connecting to shim 4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85" address="unix:///run/containerd/s/1ec053687a1cce6228ae30e5454a136493b3736fe3a15cddffa2fefd6c9dac76" protocol=ttrpc version=3 Jun 20 19:32:00.174987 systemd[1]: Started cri-containerd-2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903.scope - libcontainer container 2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903. Jun 20 19:32:00.177389 systemd[1]: Started cri-containerd-4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85.scope - libcontainer container 4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85. 
Jun 20 19:32:00.201713 containerd[2800]: time="2025-06-20T19:32:00.201681304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dw589,Uid:89829641-c7f4-45b6-a27e-6b93bdf68c55,Namespace:kube-system,Attempt:0,} returns sandbox id \"2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903\"" Jun 20 19:32:00.204008 containerd[2800]: time="2025-06-20T19:32:00.203982487Z" level=info msg="CreateContainer within sandbox \"2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:32:00.206251 containerd[2800]: time="2025-06-20T19:32:00.206227823Z" level=info msg="StartContainer for \"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" returns successfully" Jun 20 19:32:00.208327 containerd[2800]: time="2025-06-20T19:32:00.208301744Z" level=info msg="Container b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:00.208756 kubelet[4320]: I0620 19:32:00.208736 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:00.210961 containerd[2800]: time="2025-06-20T19:32:00.210938439Z" level=info msg="CreateContainer within sandbox \"2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8\"" Jun 20 19:32:00.211236 containerd[2800]: time="2025-06-20T19:32:00.211218066Z" level=info msg="StartContainer for \"b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8\"" Jun 20 19:32:00.211963 containerd[2800]: time="2025-06-20T19:32:00.211941615Z" level=info msg="connecting to shim b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8" address="unix:///run/containerd/s/c146153428b939e18acb0d2aa00791fbcfefa89a69636b9d118a1b207e8fe3e0" protocol=ttrpc version=3 Jun 20 19:32:00.232060 systemd-networkd[2708]: cali3eafb8e28bf: Link UP Jun 20 19:32:00.232275 systemd-networkd[2708]: cali3eafb8e28bf: Gained carrier Jun 20 19:32:00.238060 systemd[1]: Started cri-containerd-b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8.scope - libcontainer container b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8. 
Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.068 [INFO][7873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0 calico-apiserver-796757f947- calico-apiserver 56131e8f-0d6e-4ded-a253-1effc765ab02 830 0 2025-06-20 19:31:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:796757f947 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 calico-apiserver-796757f947-pwr97 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3eafb8e28bf [] [] }} ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.068 [INFO][7873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.088 [INFO][7920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.088 [INFO][7920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043c7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-403d322406", "pod":"calico-apiserver-796757f947-pwr97", "timestamp":"2025-06-20 19:32:00.088230704 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.088 [INFO][7920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.115 [INFO][7920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.115 [INFO][7920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.196 [INFO][7920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.200 [INFO][7920] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.216 [INFO][7920] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.218 [INFO][7920] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.220 [INFO][7920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.220 [INFO][7920] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.221 [INFO][7920] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.223 [INFO][7920] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.228 [INFO][7920] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.135/26] block=192.168.80.128/26 handle="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.229 [INFO][7920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.135/26] handle="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.229 [INFO][7920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:32:00.240035 containerd[2800]: 2025-06-20 19:32:00.229 [INFO][7920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.135/26] IPv6=[] ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:00.240471 containerd[2800]: 2025-06-20 19:32:00.230 [INFO][7873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0", GenerateName:"calico-apiserver-796757f947-", Namespace:"calico-apiserver", SelfLink:"", UID:"56131e8f-0d6e-4ded-a253-1effc765ab02", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796757f947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"calico-apiserver-796757f947-pwr97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3eafb8e28bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:00.240471 containerd[2800]: 2025-06-20 19:32:00.230 [INFO][7873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.135/32] ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:00.240471 containerd[2800]: 2025-06-20 19:32:00.230 [INFO][7873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3eafb8e28bf ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:00.240471 containerd[2800]: 2025-06-20 19:32:00.232 [INFO][7873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:00.240471 containerd[2800]: 2025-06-20 19:32:00.233 
[INFO][7873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0", GenerateName:"calico-apiserver-796757f947-", Namespace:"calico-apiserver", SelfLink:"", UID:"56131e8f-0d6e-4ded-a253-1effc765ab02", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796757f947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc", Pod:"calico-apiserver-796757f947-pwr97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3eafb8e28bf", MAC:"9a:e4:64:03:e3:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:00.240471 containerd[2800]: 2025-06-20 19:32:00.238 [INFO][7873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-pwr97" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:00.250906 containerd[2800]: time="2025-06-20T19:32:00.250868136Z" level=info msg="connecting to shim 9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" address="unix:///run/containerd/s/5aa4843badd69b51b6e9841a118f295e30e3eeb8f38643d69000e596297597d7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:00.259321 containerd[2800]: time="2025-06-20T19:32:00.259293990Z" level=info msg="StartContainer for \"b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8\" returns successfully" Jun 20 19:32:00.281021 containerd[2800]: time="2025-06-20T19:32:00.280979565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"ea0c455b947d0d9991ab7387b37fb114f52ad27c27cc1eb796cbfc07cfb92353\" pid:8100 exit_status:1 exited_at:{seconds:1750447920 nanos:280693898}" Jun 20 19:32:00.285034 systemd[1]: Started cri-containerd-9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc.scope - libcontainer container 9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc. 
Jun 20 19:32:00.312947 containerd[2800]: time="2025-06-20T19:32:00.312906130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-pwr97,Uid:56131e8f-0d6e-4ded-a253-1effc765ab02,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\"" Jun 20 19:32:00.315437 containerd[2800]: time="2025-06-20T19:32:00.315412572Z" level=info msg="CreateContainer within sandbox \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:32:00.318760 containerd[2800]: time="2025-06-20T19:32:00.318734253Z" level=info msg="Container 1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:00.321760 containerd[2800]: time="2025-06-20T19:32:00.321737143Z" level=info msg="CreateContainer within sandbox \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\"" Jun 20 19:32:00.322077 containerd[2800]: time="2025-06-20T19:32:00.322056974Z" level=info msg="StartContainer for \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\"" Jun 20 19:32:00.323008 containerd[2800]: time="2025-06-20T19:32:00.322983183Z" level=info msg="connecting to shim 1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7" address="unix:///run/containerd/s/5aa4843badd69b51b6e9841a118f295e30e3eeb8f38643d69000e596297597d7" protocol=ttrpc version=3 Jun 20 19:32:00.337593 containerd[2800]: time="2025-06-20T19:32:00.337556151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"3702664d506f2efff6c5a3e8995268a8f388b9f5102a1b660377e9b29601e105\" pid:8249 exit_status:1 exited_at:{seconds:1750447920 nanos:337258123}" Jun 20 19:32:00.350999 systemd[1]: Started cri-containerd-1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7.scope - libcontainer container 1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7. 
Jun 20 19:32:00.380319 containerd[2800]: time="2025-06-20T19:32:00.380292440Z" level=info msg="StartContainer for \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" returns successfully" Jun 20 19:32:00.682943 systemd-networkd[2708]: cali0f22e6dac09: Gained IPv6LL Jun 20 19:32:00.704211 containerd[2800]: time="2025-06-20T19:32:00.704173171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:00.704293 containerd[2800]: time="2025-06-20T19:32:00.704188532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8226240" Jun 20 19:32:00.704834 containerd[2800]: time="2025-06-20T19:32:00.704812073Z" level=info msg="ImageCreate event name:\"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:00.706274 containerd[2800]: time="2025-06-20T19:32:00.706253732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:00.706953 containerd[2800]: time="2025-06-20T19:32:00.706901875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"9595481\" in 575.402671ms" Jun 20 19:32:00.706980 containerd[2800]: time="2025-06-20T19:32:00.706957960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\"" Jun 20 19:32:00.708917 containerd[2800]: time="2025-06-20T19:32:00.708894867Z" level=info msg="CreateContainer within sandbox \"2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:32:00.713917 containerd[2800]: time="2025-06-20T19:32:00.713886429Z" level=info msg="Container ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:00.717739 containerd[2800]: time="2025-06-20T19:32:00.717710399Z" level=info msg="CreateContainer within sandbox \"2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059\"" Jun 20 19:32:00.718114 containerd[2800]: time="2025-06-20T19:32:00.718093636Z" level=info msg="StartContainer for \"ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059\"" Jun 20 19:32:00.719376 containerd[2800]: time="2025-06-20T19:32:00.719355278Z" level=info msg="connecting to shim ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059" address="unix:///run/containerd/s/fe77240b82dd20c368534cdaffaf83a788be9ebaf4d2279596162f590cafa59b" protocol=ttrpc version=3 Jun 20 19:32:00.749963 systemd[1]: Started cri-containerd-ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059.scope - libcontainer container ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059. 
Jun 20 19:32:00.785453 containerd[2800]: time="2025-06-20T19:32:00.785417980Z" level=info msg="StartContainer for \"ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059\" returns successfully" Jun 20 19:32:00.786243 containerd[2800]: time="2025-06-20T19:32:00.786224418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:32:00.873946 systemd-networkd[2708]: calid3e2b4c6d35: Gained IPv6LL Jun 20 19:32:01.034319 containerd[2800]: time="2025-06-20T19:32:01.034275810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9z2k6,Uid:4fdaa1f1-f07b-43b7-8b92-4c1a87677155,Namespace:calico-system,Attempt:0,}" Jun 20 19:32:01.034414 containerd[2800]: time="2025-06-20T19:32:01.034278570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-zxknb,Uid:fb8f50da-2df9-4446-a7af-2903440861ca,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:32:01.112276 kubelet[4320]: I0620 19:32:01.112248 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:01.114407 kubelet[4320]: I0620 19:32:01.114306 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dw589" podStartSLOduration=30.114292222 podStartE2EDuration="30.114292222s" podCreationTimestamp="2025-06-20 19:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:01.114283221 +0000 UTC m=+35.157778535" watchObservedRunningTime="2025-06-20 19:32:01.114292222 +0000 UTC m=+35.157787536" Jun 20 19:32:01.122911 systemd-networkd[2708]: cali41935d2924b: Link UP Jun 20 19:32:01.124022 systemd-networkd[2708]: cali41935d2924b: Gained carrier Jun 20 19:32:01.128566 kubelet[4320]: I0620 19:32:01.128516 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-796757f947-pwr97" podStartSLOduration=23.128497978 podStartE2EDuration="23.128497978s" podCreationTimestamp="2025-06-20 19:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:01.121206542 +0000 UTC m=+35.164701816" watchObservedRunningTime="2025-06-20 19:32:01.128497978 +0000 UTC m=+35.171993252" Jun 20 19:32:01.128705 kubelet[4320]: I0620 19:32:01.128633 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f884559d4-mkbr4" podStartSLOduration=19.197789622 podStartE2EDuration="20.12862923s" podCreationTimestamp="2025-06-20 19:31:41 +0000 UTC" firstStartedPulling="2025-06-20 19:31:59.200513142 +0000 UTC m=+33.244008416" lastFinishedPulling="2025-06-20 19:32:00.13135275 +0000 UTC m=+34.174848024" observedRunningTime="2025-06-20 19:32:01.128385327 +0000 UTC m=+35.171880641" watchObservedRunningTime="2025-06-20 19:32:01.12862923 +0000 UTC m=+35.172124584" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.072 [INFO][8398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0 goldmane-5bd85449d4- calico-system 4fdaa1f1-f07b-43b7-8b92-4c1a87677155 829 0 2025-06-20 19:31:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 goldmane-5bd85449d4-9z2k6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali41935d2924b [] [] }} ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.072 [INFO][8398] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" HandleID="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Workload="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8445] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" HandleID="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Workload="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000d81b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-403d322406", "pod":"goldmane-5bd85449d4-9z2k6", "timestamp":"2025-06-20 19:32:01.092403354 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.101 [INFO][8445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.104 [INFO][8445] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.108 [INFO][8445] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.109 [INFO][8445] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.111 [INFO][8445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.112 [INFO][8445] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.113 [INFO][8445] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8 Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.115 [INFO][8445] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.120 [INFO][8445] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.136/26] block=192.168.80.128/26 handle="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.120 [INFO][8445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.136/26] handle="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.120 [INFO][8445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:32:01.131823 containerd[2800]: 2025-06-20 19:32:01.120 [INFO][8445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.136/26] IPv6=[] ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" HandleID="k8s-pod-network.24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Workload="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" Jun 20 19:32:01.132303 containerd[2800]: 2025-06-20 19:32:01.121 [INFO][8398] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"4fdaa1f1-f07b-43b7-8b92-4c1a87677155", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"goldmane-5bd85449d4-9z2k6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41935d2924b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:01.132303 containerd[2800]: 2025-06-20 19:32:01.121 [INFO][8398] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.136/32] ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" Jun 20 19:32:01.132303 containerd[2800]: 2025-06-20 19:32:01.121 [INFO][8398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41935d2924b ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" Jun 20 19:32:01.132303 containerd[2800]: 2025-06-20 19:32:01.124 [INFO][8398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" Jun 20 19:32:01.132303 containerd[2800]: 2025-06-20 19:32:01.124 [INFO][8398] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" 
Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"4fdaa1f1-f07b-43b7-8b92-4c1a87677155", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8", Pod:"goldmane-5bd85449d4-9z2k6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41935d2924b", MAC:"0e:4b:f5:60:ac:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:01.132303 containerd[2800]: 2025-06-20 19:32:01.129 [INFO][8398] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" Namespace="calico-system" Pod="goldmane-5bd85449d4-9z2k6" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-goldmane--5bd85449d4--9z2k6-eth0" Jun 20 19:32:01.141838 containerd[2800]: time="2025-06-20T19:32:01.141795090Z" level=info msg="connecting to shim 24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8" address="unix:///run/containerd/s/6b9ace10a95b70d191bfa41c1c1f0619be1726126e1b315841d51e145d8d4074" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:01.148959 containerd[2800]: time="2025-06-20T19:32:01.148931031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"f2b26acaa2ca972324c18d58e3d3f9896478b1d4eef8db056dd2c71c303c04fd\" pid:8501 exited_at:{seconds:1750447921 nanos:148253008}" Jun 20 19:32:01.170057 systemd[1]: Started cri-containerd-24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8.scope - libcontainer container 24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8. 
Jun 20 19:32:01.193935 systemd-networkd[2708]: vxlan.calico: Gained IPv6LL Jun 20 19:32:01.203189 containerd[2800]: time="2025-06-20T19:32:01.203150293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-9z2k6,Uid:4fdaa1f1-f07b-43b7-8b92-4c1a87677155,Namespace:calico-system,Attempt:0,} returns sandbox id \"24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8\"" Jun 20 19:32:01.224969 systemd-networkd[2708]: cali64498d7e235: Link UP Jun 20 19:32:01.225183 systemd-networkd[2708]: cali64498d7e235: Gained carrier Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.073 [INFO][8400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0 calico-apiserver-796757f947- calico-apiserver fb8f50da-2df9-4446-a7af-2903440861ca 832 0 2025-06-20 19:31:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:796757f947 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 calico-apiserver-796757f947-zxknb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64498d7e235 [] [] }} ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.073 [INFO][8400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-403d322406", "pod":"calico-apiserver-796757f947-zxknb", "timestamp":"2025-06-20 19:32:01.092713383 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.092 [INFO][8447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.120 [INFO][8447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.120 [INFO][8447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.201 [INFO][8447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.205 [INFO][8447] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.208 [INFO][8447] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.210 [INFO][8447] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.212 [INFO][8447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.212 [INFO][8447] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.213 [INFO][8447] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.216 [INFO][8447] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.221 [INFO][8447] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.137/26] block=192.168.80.128/26 handle="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.221 [INFO][8447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.137/26] handle="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.221 [INFO][8447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:32:01.232503 containerd[2800]: 2025-06-20 19:32:01.221 [INFO][8447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.137/26] IPv6=[] ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:32:01.232979 containerd[2800]: 2025-06-20 19:32:01.222 [INFO][8400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0", GenerateName:"calico-apiserver-796757f947-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb8f50da-2df9-4446-a7af-2903440861ca", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796757f947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"calico-apiserver-796757f947-zxknb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64498d7e235", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:01.232979 containerd[2800]: 2025-06-20 19:32:01.222 [INFO][8400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.137/32] ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:32:01.232979 containerd[2800]: 2025-06-20 19:32:01.222 [INFO][8400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64498d7e235 ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:32:01.232979 containerd[2800]: 2025-06-20 19:32:01.225 [INFO][8400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:32:01.232979 containerd[2800]: 2025-06-20 19:32:01.225 
[INFO][8400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0", GenerateName:"calico-apiserver-796757f947-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb8f50da-2df9-4446-a7af-2903440861ca", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 31, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796757f947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4", Pod:"calico-apiserver-796757f947-zxknb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64498d7e235", MAC:"ba:d9:fe:d9:04:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:01.232979 containerd[2800]: 2025-06-20 19:32:01.230 [INFO][8400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Namespace="calico-apiserver" Pod="calico-apiserver-796757f947-zxknb" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:32:01.244144 containerd[2800]: time="2025-06-20T19:32:01.244100406Z" level=info msg="connecting to shim 98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" address="unix:///run/containerd/s/1183aefd4b880769873950c45c0111026959703a8f6733b089d6db369ab600ca" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:01.271043 systemd[1]: Started cri-containerd-98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4.scope - libcontainer container 98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4. 
Jun 20 19:32:01.297763 containerd[2800]: time="2025-06-20T19:32:01.297686570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796757f947-zxknb,Uid:fb8f50da-2df9-4446-a7af-2903440861ca,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\"" Jun 20 19:32:01.300083 containerd[2800]: time="2025-06-20T19:32:01.300056870Z" level=info msg="CreateContainer within sandbox \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:32:01.300342 containerd[2800]: time="2025-06-20T19:32:01.300321454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:01.300372 containerd[2800]: time="2025-06-20T19:32:01.300347817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=13749925" Jun 20 19:32:01.301027 containerd[2800]: time="2025-06-20T19:32:01.301001637Z" level=info msg="ImageCreate event name:\"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:01.302334 containerd[2800]: time="2025-06-20T19:32:01.302308718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:01.302980 containerd[2800]: time="2025-06-20T19:32:01.302958779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"15119118\" in 516.703598ms" Jun 20 19:32:01.303003 containerd[2800]: time="2025-06-20T19:32:01.302986101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\"" Jun 20 19:32:01.303362 containerd[2800]: time="2025-06-20T19:32:01.303339694Z" level=info msg="Container 9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:01.303588 containerd[2800]: time="2025-06-20T19:32:01.303570035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:32:01.304899 containerd[2800]: time="2025-06-20T19:32:01.304879396Z" level=info msg="CreateContainer within sandbox \"2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 19:32:01.306806 containerd[2800]: time="2025-06-20T19:32:01.306782093Z" level=info msg="CreateContainer within sandbox \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\"" Jun 20 19:32:01.307080 containerd[2800]: time="2025-06-20T19:32:01.307062479Z" level=info msg="StartContainer for \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\"" Jun 20 19:32:01.308012 
containerd[2800]: time="2025-06-20T19:32:01.307990885Z" level=info msg="connecting to shim 9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe" address="unix:///run/containerd/s/1183aefd4b880769873950c45c0111026959703a8f6733b089d6db369ab600ca" protocol=ttrpc version=3 Jun 20 19:32:01.308822 containerd[2800]: time="2025-06-20T19:32:01.308796799Z" level=info msg="Container 3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:01.313223 containerd[2800]: time="2025-06-20T19:32:01.313196727Z" level=info msg="CreateContainer within sandbox \"2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b\"" Jun 20 19:32:01.313485 containerd[2800]: time="2025-06-20T19:32:01.313466672Z" level=info msg="StartContainer for \"3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b\"" Jun 20 19:32:01.314775 containerd[2800]: time="2025-06-20T19:32:01.314755231Z" level=info msg="connecting to shim 3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b" address="unix:///run/containerd/s/fe77240b82dd20c368534cdaffaf83a788be9ebaf4d2279596162f590cafa59b" protocol=ttrpc version=3 Jun 20 19:32:01.321941 systemd-networkd[2708]: cali550aae95c58: Gained IPv6LL Jun 20 19:32:01.334997 systemd[1]: Started cri-containerd-9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe.scope - libcontainer container 9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe. Jun 20 19:32:01.337435 systemd[1]: Started cri-containerd-3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b.scope - libcontainer container 3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b. 
Jun 20 19:32:01.364940 containerd[2800]: time="2025-06-20T19:32:01.364907957Z" level=info msg="StartContainer for \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" returns successfully" Jun 20 19:32:01.365933 containerd[2800]: time="2025-06-20T19:32:01.365911090Z" level=info msg="StartContainer for \"3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b\" returns successfully" Jun 20 19:32:02.078055 kubelet[4320]: I0620 19:32:02.078031 4320 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 19:32:02.078055 kubelet[4320]: I0620 19:32:02.078060 4320 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 19:32:02.118378 kubelet[4320]: I0620 19:32:02.118349 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:02.123521 kubelet[4320]: I0620 19:32:02.123479 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-796757f947-zxknb" podStartSLOduration=24.123467167 podStartE2EDuration="24.123467167s" podCreationTimestamp="2025-06-20 19:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:02.123259988 +0000 UTC m=+36.166755302" watchObservedRunningTime="2025-06-20 19:32:02.123467167 +0000 UTC m=+36.166962481" Jun 20 19:32:02.140303 kubelet[4320]: I0620 19:32:02.140259 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tw4bd" podStartSLOduration=19.112820912 podStartE2EDuration="21.140245258s" podCreationTimestamp="2025-06-20 19:31:41 +0000 UTC" firstStartedPulling="2025-06-20 19:31:59.27604752 +0000 UTC m=+33.319542834" lastFinishedPulling="2025-06-20 19:32:01.303471866 +0000 UTC m=+35.346967180" observedRunningTime="2025-06-20 19:32:02.14004288 +0000 UTC m=+36.183538194" watchObservedRunningTime="2025-06-20 19:32:02.140245258 +0000 UTC m=+36.183740572" Jun 20 19:32:02.154024 systemd-networkd[2708]: cali3eafb8e28bf: Gained IPv6LL Jun 20 19:32:02.195423 containerd[2800]: time="2025-06-20T19:32:02.195385640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:02.195713 containerd[2800]: time="2025-06-20T19:32:02.195398641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=61832718" Jun 20 19:32:02.196091 containerd[2800]: time="2025-06-20T19:32:02.196072021Z" level=info msg="ImageCreate event name:\"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:02.197797 containerd[2800]: time="2025-06-20T19:32:02.197776573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:02.198481 containerd[2800]: time="2025-06-20T19:32:02.198461314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"61832564\" in 894.858036ms" Jun 20 19:32:02.198505 containerd[2800]: time="2025-06-20T19:32:02.198489276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\"" Jun 20 19:32:02.200399 containerd[2800]: time="2025-06-20T19:32:02.200382204Z" level=info msg="CreateContainer within sandbox \"24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:32:02.204178 containerd[2800]: time="2025-06-20T19:32:02.204155740Z" level=info msg="Container 65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:02.207675 containerd[2800]: time="2025-06-20T19:32:02.207652291Z" level=info msg="CreateContainer within sandbox \"24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\"" Jun 20 19:32:02.208049 containerd[2800]: time="2025-06-20T19:32:02.208024964Z" level=info msg="StartContainer for \"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\"" Jun 20 19:32:02.209055 containerd[2800]: time="2025-06-20T19:32:02.209033734Z" level=info msg="connecting to shim 65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722" address="unix:///run/containerd/s/6b9ace10a95b70d191bfa41c1c1f0619be1726126e1b315841d51e145d8d4074" protocol=ttrpc version=3 Jun 20 19:32:02.237970 systemd[1]: Started cri-containerd-65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722.scope - libcontainer container 65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722. 
Jun 20 19:32:02.267876 containerd[2800]: time="2025-06-20T19:32:02.267846562Z" level=info msg="StartContainer for \"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" returns successfully" Jun 20 19:32:02.985972 systemd-networkd[2708]: cali41935d2924b: Gained IPv6LL Jun 20 19:32:02.986241 systemd-networkd[2708]: cali64498d7e235: Gained IPv6LL Jun 20 19:32:03.130498 kubelet[4320]: I0620 19:32:03.130331 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-9z2k6" podStartSLOduration=21.135301433 podStartE2EDuration="22.130311223s" podCreationTimestamp="2025-06-20 19:31:41 +0000 UTC" firstStartedPulling="2025-06-20 19:32:01.204040976 +0000 UTC m=+35.247536290" lastFinishedPulling="2025-06-20 19:32:02.199050766 +0000 UTC m=+36.242546080" observedRunningTime="2025-06-20 19:32:03.129753656 +0000 UTC m=+37.173248970" watchObservedRunningTime="2025-06-20 19:32:03.130311223 +0000 UTC m=+37.173806537" Jun 20 19:32:04.122764 kubelet[4320]: I0620 19:32:04.122730 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:10.415758 kubelet[4320]: I0620 19:32:10.415624 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:10.970107 kubelet[4320]: I0620 19:32:10.970047 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:11.053167 containerd[2800]: time="2025-06-20T19:32:11.053134235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"25791b8be7637e26c726154afd99c7d33398fe4664fbcf8e260a0e43d176e4ae\" pid:8901 exit_status:1 exited_at:{seconds:1750447931 nanos:52878618}" Jun 20 19:32:11.114543 containerd[2800]: time="2025-06-20T19:32:11.114512218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"cbe84341fcd5961299536b7bf138180ed88e8df0591703db894b37988f3693a4\" pid:8938 exit_status:1 exited_at:{seconds:1750447931 nanos:114290404}" Jun 20 19:32:23.341409 kubelet[4320]: I0620 19:32:23.341318 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:23.369007 containerd[2800]: time="2025-06-20T19:32:23.368972183Z" level=info msg="StopContainer for \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" with timeout 30 (s)" Jun 20 19:32:23.369301 containerd[2800]: time="2025-06-20T19:32:23.369279597Z" level=info msg="Stop container \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" with signal terminated" Jun 20 19:32:23.377663 systemd[1]: cri-containerd-1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7.scope: Deactivated successfully. 
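
The kubelet pod_startup_latency_tracker entries above imply a simple relationship: the SLO duration is the end-to-end startup duration minus the image-pull window. A short Go sketch of that arithmetic (illustrative only, not kubelet code), using the goldmane pod's timestamps from the entry; it reproduces the logged podStartSLOduration of 21.135301433s:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Timestamps from the goldmane "Observed pod startup duration" entry.
        firstPull := parse("2025-06-20 19:32:01.204040976 +0000 UTC")
        lastPull := parse("2025-06-20 19:32:02.199050766 +0000 UTC")
        e2e := 22130311223 * time.Nanosecond // podStartE2EDuration="22.130311223s"

        // The SLO figure excludes the time spent pulling images.
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(slo) // 21.135301433s, matching podStartSLOduration
    }
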
Jun 20 19:32:23.379236 containerd[2800]: time="2025-06-20T19:32:23.379209507Z" level=info msg="received exit event container_id:\"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" id:\"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" pid:8299 exit_status:1 exited_at:{seconds:1750447943 nanos:378972336}" Jun 20 19:32:23.379272 containerd[2800]: time="2025-06-20T19:32:23.379235989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" id:\"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" pid:8299 exit_status:1 exited_at:{seconds:1750447943 nanos:378972336}" Jun 20 19:32:23.388840 systemd[1]: Created slice kubepods-besteffort-podcf2f1b00_cde0_42e8_b08c_0ad1d7669340.slice - libcontainer container kubepods-besteffort-podcf2f1b00_cde0_42e8_b08c_0ad1d7669340.slice. Jun 20 19:32:23.396396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7-rootfs.mount: Deactivated successfully. Jun 20 19:32:23.482592 kubelet[4320]: I0620 19:32:23.482555 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf2f1b00-cde0-42e8-b08c-0ad1d7669340-calico-apiserver-certs\") pod \"calico-apiserver-9b9958ddd-xh4zj\" (UID: \"cf2f1b00-cde0-42e8-b08c-0ad1d7669340\") " pod="calico-apiserver/calico-apiserver-9b9958ddd-xh4zj" Jun 20 19:32:23.482748 kubelet[4320]: I0620 19:32:23.482598 4320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqbbk\" (UniqueName: \"kubernetes.io/projected/cf2f1b00-cde0-42e8-b08c-0ad1d7669340-kube-api-access-fqbbk\") pod \"calico-apiserver-9b9958ddd-xh4zj\" (UID: \"cf2f1b00-cde0-42e8-b08c-0ad1d7669340\") " pod="calico-apiserver/calico-apiserver-9b9958ddd-xh4zj" Jun 20 19:32:23.691356 containerd[2800]: time="2025-06-20T19:32:23.691287209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b9958ddd-xh4zj,Uid:cf2f1b00-cde0-42e8-b08c-0ad1d7669340,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:32:23.724878 containerd[2800]: time="2025-06-20T19:32:23.724839958Z" level=info msg="StopContainer for \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" returns successfully" Jun 20 19:32:23.725338 containerd[2800]: time="2025-06-20T19:32:23.725317101Z" level=info msg="StopPodSandbox for \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\"" Jun 20 19:32:23.725386 containerd[2800]: time="2025-06-20T19:32:23.725374663Z" level=info msg="Container to stop \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:32:23.730924 systemd[1]: cri-containerd-9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc.scope: Deactivated successfully. 
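
The slice that systemd creates for the new pod follows the naming pattern visible in the entry itself: the pod UID with dashes replaced by underscores, nested under the besteffort kubepods hierarchy. A small Go sketch of that naming (an observation from the log, not kubelet's actual cgroup code):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the unit-name pattern seen in the log: the pod UID with
    // dashes turned into underscores, placed under the besteffort kubepods slice.
    func sliceName(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(sliceName("cf2f1b00-cde0-42e8-b08c-0ad1d7669340"))
        // kubepods-besteffort-podcf2f1b00_cde0_42e8_b08c_0ad1d7669340.slice
    }
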
Jun 20 19:32:23.731422 containerd[2800]: time="2025-06-20T19:32:23.731401669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" id:\"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" pid:8235 exit_status:137 exited_at:{seconds:1750447943 nanos:731191339}" Jun 20 19:32:23.750184 containerd[2800]: time="2025-06-20T19:32:23.750121995Z" level=error msg="ttrpc: failed to handle message" error="context canceled" stream=59 Jun 20 19:32:23.750221 containerd[2800]: time="2025-06-20T19:32:23.750183758Z" level=info msg="shim disconnected" id=9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc namespace=k8s.io Jun 20 19:32:23.750221 containerd[2800]: time="2025-06-20T19:32:23.750196639Z" level=warning msg="cleaning up after shim disconnected" id=9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc namespace=k8s.io Jun 20 19:32:23.750221 containerd[2800]: time="2025-06-20T19:32:23.750203319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:32:23.751838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc-rootfs.mount: Deactivated successfully. Jun 20 19:32:23.761175 containerd[2800]: time="2025-06-20T19:32:23.761147318Z" level=info msg="received exit event sandbox_id:\"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" exit_status:137 exited_at:{seconds:1750447943 nanos:731191339}" Jun 20 19:32:23.797725 systemd-networkd[2708]: cali3eafb8e28bf: Link DOWN Jun 20 19:32:23.797729 systemd-networkd[2708]: cali3eafb8e28bf: Lost carrier Jun 20 19:32:23.800266 systemd-networkd[2708]: cali91a1c83ce33: Link UP Jun 20 19:32:23.800480 systemd-networkd[2708]: cali91a1c83ce33: Gained carrier Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.748 [INFO][9004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0 calico-apiserver-9b9958ddd- calico-apiserver cf2f1b00-cde0-42e8-b08c-0ad1d7669340 1134 0 2025-06-20 19:32:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b9958ddd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-403d322406 calico-apiserver-9b9958ddd-xh4zj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali91a1c83ce33 [] [] }} ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.748 [INFO][9004] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.769 [INFO][9060] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" 
HandleID="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.769 [INFO][9060] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" HandleID="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-403d322406", "pod":"calico-apiserver-9b9958ddd-xh4zj", "timestamp":"2025-06-20 19:32:23.769071693 +0000 UTC"}, Hostname:"ci-4344.1.0-a-403d322406", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.769 [INFO][9060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.769 [INFO][9060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.769 [INFO][9060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-403d322406' Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.777 [INFO][9060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.781 [INFO][9060] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.784 [INFO][9060] ipam/ipam.go 511: Trying affinity for 192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.786 [INFO][9060] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.788 [INFO][9060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.788 [INFO][9060] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.789 [INFO][9060] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192 Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.791 [INFO][9060] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.795 [INFO][9060] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.138/26] block=192.168.80.128/26 handle="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 
containerd[2800]: 2025-06-20 19:32:23.795 [INFO][9060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.138/26] handle="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" host="ci-4344.1.0-a-403d322406" Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.795 [INFO][9060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:23.807783 containerd[2800]: 2025-06-20 19:32:23.795 [INFO][9060] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.138/26] IPv6=[] ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" HandleID="k8s-pod-network.f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" Jun 20 19:32:23.808302 containerd[2800]: 2025-06-20 19:32:23.797 [INFO][9004] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0", GenerateName:"calico-apiserver-9b9958ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf2f1b00-cde0-42e8-b08c-0ad1d7669340", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b9958ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"", Pod:"calico-apiserver-9b9958ddd-xh4zj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91a1c83ce33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:23.808302 containerd[2800]: 2025-06-20 19:32:23.797 [INFO][9004] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.138/32] ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" Jun 20 19:32:23.808302 containerd[2800]: 2025-06-20 19:32:23.797 [INFO][9004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91a1c83ce33 ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" Jun 20 19:32:23.808302 containerd[2800]: 2025-06-20 19:32:23.800 [INFO][9004] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" Jun 20 19:32:23.808302 containerd[2800]: 2025-06-20 19:32:23.800 [INFO][9004] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0", GenerateName:"calico-apiserver-9b9958ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf2f1b00-cde0-42e8-b08c-0ad1d7669340", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b9958ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-403d322406", ContainerID:"f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192", Pod:"calico-apiserver-9b9958ddd-xh4zj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91a1c83ce33", MAC:"92:08:15:74:2f:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:32:23.808302 containerd[2800]: 2025-06-20 19:32:23.806 [INFO][9004] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" Namespace="calico-apiserver" Pod="calico-apiserver-9b9958ddd-xh4zj" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--9b9958ddd--xh4zj-eth0" Jun 20 19:32:23.817907 containerd[2800]: time="2025-06-20T19:32:23.817875764Z" level=info msg="connecting to shim f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192" address="unix:///run/containerd/s/ca8f49a1502b8485d1504fb90f6e4ab0aa9693eed218beff173327e3a5830048" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:32:23.844977 systemd[1]: Started cri-containerd-f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192.scope - libcontainer container f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192. 
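
The ipam entries above show the host reusing its affine block 192.168.80.128/26 and handing out 192.168.80.138 from it. A stdlib Go check of the containment and of the 64-address block size implied by the /26:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Affine block and assigned address from the ipam entries above.
        block := netip.MustParsePrefix("192.168.80.128/26")
        addr := netip.MustParseAddr("192.168.80.138")

        fmt.Println(block.Contains(addr))     // true: the address comes from the host's block
        fmt.Println(1 << (32 - block.Bits())) // 64: addresses carved out per /26 block
    }
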
Jun 20 19:32:23.871581 containerd[2800]: time="2025-06-20T19:32:23.871553067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b9958ddd-xh4zj,Uid:cf2f1b00-cde0-42e8-b08c-0ad1d7669340,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192\"" Jun 20 19:32:23.874005 containerd[2800]: time="2025-06-20T19:32:23.873971141Z" level=info msg="CreateContainer within sandbox \"f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:32:23.877607 containerd[2800]: time="2025-06-20T19:32:23.877583312Z" level=info msg="Container 642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:32:23.880690 containerd[2800]: time="2025-06-20T19:32:23.880662578Z" level=info msg="CreateContainer within sandbox \"f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1\"" Jun 20 19:32:23.880948 containerd[2800]: time="2025-06-20T19:32:23.880927271Z" level=info msg="StartContainer for \"642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1\"" Jun 20 19:32:23.881846 containerd[2800]: time="2025-06-20T19:32:23.881824393Z" level=info msg="connecting to shim 642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1" address="unix:///run/containerd/s/ca8f49a1502b8485d1504fb90f6e4ab0aa9693eed218beff173327e3a5830048" protocol=ttrpc version=3 Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.796 [INFO][9096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.797 [INFO][9096] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" iface="eth0" netns="/var/run/netns/cni-6a69e45c-11f2-2c0d-726c-444a1ba9e7e4" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.797 [INFO][9096] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" iface="eth0" netns="/var/run/netns/cni-6a69e45c-11f2-2c0d-726c-444a1ba9e7e4" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.823 [INFO][9096] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" after=25.924828ms iface="eth0" netns="/var/run/netns/cni-6a69e45c-11f2-2c0d-726c-444a1ba9e7e4" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.823 [INFO][9096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.823 [INFO][9096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.840 [INFO][9161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.840 [INFO][9161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.840 [INFO][9161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.882 [INFO][9161] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.882 [INFO][9161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.883 [INFO][9161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:23.885974 containerd[2800]: 2025-06-20 19:32:23.884 [INFO][9096] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:23.886266 containerd[2800]: time="2025-06-20T19:32:23.886203681Z" level=info msg="TearDown network for sandbox \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" successfully" Jun 20 19:32:23.886266 containerd[2800]: time="2025-06-20T19:32:23.886228882Z" level=info msg="StopPodSandbox for \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" returns successfully" Jun 20 19:32:23.907996 systemd[1]: Started cri-containerd-642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1.scope - libcontainer container 642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1. 
Jun 20 19:32:23.937695 containerd[2800]: time="2025-06-20T19:32:23.937667558Z" level=info msg="StartContainer for \"642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1\" returns successfully" Jun 20 19:32:23.986055 kubelet[4320]: I0620 19:32:23.986023 4320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56131e8f-0d6e-4ded-a253-1effc765ab02-calico-apiserver-certs\") pod \"56131e8f-0d6e-4ded-a253-1effc765ab02\" (UID: \"56131e8f-0d6e-4ded-a253-1effc765ab02\") " Jun 20 19:32:23.986055 kubelet[4320]: I0620 19:32:23.986064 4320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvp5\" (UniqueName: \"kubernetes.io/projected/56131e8f-0d6e-4ded-a253-1effc765ab02-kube-api-access-2bvp5\") pod \"56131e8f-0d6e-4ded-a253-1effc765ab02\" (UID: \"56131e8f-0d6e-4ded-a253-1effc765ab02\") " Jun 20 19:32:23.988310 kubelet[4320]: I0620 19:32:23.988285 4320 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56131e8f-0d6e-4ded-a253-1effc765ab02-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "56131e8f-0d6e-4ded-a253-1effc765ab02" (UID: "56131e8f-0d6e-4ded-a253-1effc765ab02"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:32:23.988381 kubelet[4320]: I0620 19:32:23.988356 4320 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56131e8f-0d6e-4ded-a253-1effc765ab02-kube-api-access-2bvp5" (OuterVolumeSpecName: "kube-api-access-2bvp5") pod "56131e8f-0d6e-4ded-a253-1effc765ab02" (UID: "56131e8f-0d6e-4ded-a253-1effc765ab02"). InnerVolumeSpecName "kube-api-access-2bvp5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:32:24.039640 systemd[1]: Removed slice kubepods-besteffort-pod56131e8f_0d6e_4ded_a253_1effc765ab02.slice - libcontainer container kubepods-besteffort-pod56131e8f_0d6e_4ded_a253_1effc765ab02.slice. 
Jun 20 19:32:24.086524 kubelet[4320]: I0620 19:32:24.086497 4320 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56131e8f-0d6e-4ded-a253-1effc765ab02-calico-apiserver-certs\") on node \"ci-4344.1.0-a-403d322406\" DevicePath \"\"" Jun 20 19:32:24.086524 kubelet[4320]: I0620 19:32:24.086519 4320 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2bvp5\" (UniqueName: \"kubernetes.io/projected/56131e8f-0d6e-4ded-a253-1effc765ab02-kube-api-access-2bvp5\") on node \"ci-4344.1.0-a-403d322406\" DevicePath \"\"" Jun 20 19:32:24.157333 kubelet[4320]: I0620 19:32:24.157308 4320 scope.go:117] "RemoveContainer" containerID="1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7" Jun 20 19:32:24.158798 containerd[2800]: time="2025-06-20T19:32:24.158769208Z" level=info msg="RemoveContainer for \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\"" Jun 20 19:32:24.161225 containerd[2800]: time="2025-06-20T19:32:24.161201081Z" level=info msg="RemoveContainer for \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" returns successfully" Jun 20 19:32:24.161400 kubelet[4320]: I0620 19:32:24.161375 4320 scope.go:117] "RemoveContainer" containerID="1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7" Jun 20 19:32:24.161572 containerd[2800]: time="2025-06-20T19:32:24.161538257Z" level=error msg="ContainerStatus for \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\": not found" Jun 20 19:32:24.161692 kubelet[4320]: E0620 19:32:24.161673 4320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\": not found" containerID="1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7" Jun 20 19:32:24.161732 kubelet[4320]: I0620 19:32:24.161699 4320 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7"} err="failed to get container status \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7\": not found" Jun 20 19:32:24.182327 kubelet[4320]: I0620 19:32:24.182285 4320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b9958ddd-xh4zj" podStartSLOduration=1.18227022 podStartE2EDuration="1.18227022s" podCreationTimestamp="2025-06-20 19:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:32:24.182006128 +0000 UTC m=+58.225501442" watchObservedRunningTime="2025-06-20 19:32:24.18227022 +0000 UTC m=+58.225765534" Jun 20 19:32:24.399024 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc-shm.mount: Deactivated successfully. Jun 20 19:32:24.399110 systemd[1]: run-netns-cni\x2d6a69e45c\x2d11f2\x2d2c0d\x2d726c\x2d444a1ba9e7e4.mount: Deactivated successfully. 
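
RemoveContainer succeeds, after which ContainerStatus for the same ID returns a gRPC NotFound error that kubelet records at info level and moves past. A sketch of the error-code check such handling rests on, assuming the grpc-go status helpers (google.golang.org/grpc); kubelet's own logic is more involved:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI error is the NotFound case seen above,
    // i.e. the container was already removed from the runtime.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
        fmt.Println(alreadyGone(err)) // true
    }
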
Jun 20 19:32:24.399164 systemd[1]: var-lib-kubelet-pods-56131e8f\x2d0d6e\x2d4ded\x2da253\x2d1effc765ab02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2bvp5.mount: Deactivated successfully. Jun 20 19:32:24.399216 systemd[1]: var-lib-kubelet-pods-56131e8f\x2d0d6e\x2d4ded\x2da253\x2d1effc765ab02-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jun 20 19:32:25.065969 systemd-networkd[2708]: cali91a1c83ce33: Gained IPv6LL Jun 20 19:32:25.160939 kubelet[4320]: I0620 19:32:25.160910 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:26.023679 containerd[2800]: time="2025-06-20T19:32:26.023642411Z" level=info msg="StopPodSandbox for \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\"" Jun 20 19:32:26.036107 kubelet[4320]: I0620 19:32:26.036076 4320 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56131e8f-0d6e-4ded-a253-1effc765ab02" path="/var/lib/kubelet/pods/56131e8f-0d6e-4ded-a253-1effc765ab02/volumes" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.054 [WARNING][9304] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.054 [INFO][9304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.054 [INFO][9304] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" iface="eth0" netns="" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.054 [INFO][9304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.054 [INFO][9304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.071 [INFO][9325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.071 [INFO][9325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.071 [INFO][9325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.079 [WARNING][9325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.079 [INFO][9325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.080 [INFO][9325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:26.083375 containerd[2800]: 2025-06-20 19:32:26.081 [INFO][9304] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.083695 containerd[2800]: time="2025-06-20T19:32:26.083412330Z" level=info msg="TearDown network for sandbox \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" successfully" Jun 20 19:32:26.083695 containerd[2800]: time="2025-06-20T19:32:26.083444291Z" level=info msg="StopPodSandbox for \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" returns successfully" Jun 20 19:32:26.083870 containerd[2800]: time="2025-06-20T19:32:26.083840389Z" level=info msg="RemovePodSandbox for \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\"" Jun 20 19:32:26.083904 containerd[2800]: time="2025-06-20T19:32:26.083877711Z" level=info msg="Forcibly stopping sandbox \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\"" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.117 [WARNING][9354] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.117 [INFO][9354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.117 [INFO][9354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" iface="eth0" netns="" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.117 [INFO][9354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.117 [INFO][9354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.134 [INFO][9374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.134 [INFO][9374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.134 [INFO][9374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.141 [WARNING][9374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.141 [INFO][9374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" HandleID="k8s-pod-network.9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--pwr97-eth0" Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.142 [INFO][9374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:32:26.146327 containerd[2800]: 2025-06-20 19:32:26.144 [INFO][9354] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc" Jun 20 19:32:26.146608 containerd[2800]: time="2025-06-20T19:32:26.146391753Z" level=info msg="TearDown network for sandbox \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" successfully" Jun 20 19:32:26.147982 containerd[2800]: time="2025-06-20T19:32:26.147959263Z" level=info msg="Ensure that sandbox 9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc in task-service has been cleanup successfully" Jun 20 19:32:26.150344 containerd[2800]: time="2025-06-20T19:32:26.150325209Z" level=info msg="RemovePodSandbox \"9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc\" returns successfully" Jun 20 19:32:26.715535 containerd[2800]: time="2025-06-20T19:32:26.715503062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"111c6545d5304f2da97e0c3b00ad5c4520fa52e13db78721f8f3906fecd56c21\" pid:9405 exited_at:{seconds:1750447946 nanos:714813311}" Jun 20 19:32:30.348813 containerd[2800]: time="2025-06-20T19:32:30.348780300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"223d08c7e22fa23dd54446b2a80db9dfec40539627c424e492f5a31304575e3c\" pid:9449 exited_at:{seconds:1750447950 nanos:348295159}" Jun 20 19:32:31.149269 containerd[2800]: time="2025-06-20T19:32:31.149219727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"c9eb9df52f99557a09e97b8ae937e31ca1cfaf94dfc2eb81203355958b4086d2\" pid:9483 exited_at:{seconds:1750447951 nanos:149032999}" Jun 20 19:32:41.126659 containerd[2800]: time="2025-06-20T19:32:41.126620190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"0a9c657711b96a8c2c6bd96a0d8318774bb92b8e27090cde2123241a49531390\" pid:9515 exited_at:{seconds:1750447961 nanos:126359940}" Jun 20 19:32:51.369230 containerd[2800]: time="2025-06-20T19:32:51.369196188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"bd688d457b8754bdb9d319ddba3a145f43a4e61d6fb7b00ad6ac14da1dd579ae\" pid:9561 exited_at:{seconds:1750447971 nanos:369021908}" Jun 20 19:32:59.795838 kubelet[4320]: I0620 19:32:59.795794 4320 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:32:59.822731 containerd[2800]: time="2025-06-20T19:32:59.822695966Z" level=info msg="StopContainer for \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" with timeout 30 (s)" Jun 20 19:32:59.823058 containerd[2800]: time="2025-06-20T19:32:59.823032969Z" level=info msg="Stop container \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" with signal terminated" Jun 20 19:32:59.837198 systemd[1]: cri-containerd-9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe.scope: Deactivated successfully. Jun 20 19:32:59.837524 systemd[1]: cri-containerd-9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe.scope: Consumed 1.677s CPU time, 79.1M memory peak. 
Jun 20 19:32:59.839256 containerd[2800]: time="2025-06-20T19:32:59.839126494Z" level=info msg="received exit event container_id:\"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" id:\"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" pid:8675 exit_status:1 exited_at:{seconds:1750447979 nanos:838945892}" Jun 20 19:32:59.839256 containerd[2800]: time="2025-06-20T19:32:59.839241975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" id:\"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" pid:8675 exit_status:1 exited_at:{seconds:1750447979 nanos:838945892}" Jun 20 19:32:59.855577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe-rootfs.mount: Deactivated successfully. Jun 20 19:32:59.856456 containerd[2800]: time="2025-06-20T19:32:59.856415908Z" level=info msg="StopContainer for \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" returns successfully" Jun 20 19:32:59.856800 containerd[2800]: time="2025-06-20T19:32:59.856769231Z" level=info msg="StopPodSandbox for \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\"" Jun 20 19:32:59.856846 containerd[2800]: time="2025-06-20T19:32:59.856831631Z" level=info msg="Container to stop \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:32:59.862051 systemd[1]: cri-containerd-98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4.scope: Deactivated successfully. Jun 20 19:32:59.863196 containerd[2800]: time="2025-06-20T19:32:59.863176120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" id:\"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" pid:8631 exit_status:137 exited_at:{seconds:1750447979 nanos:863001439}" Jun 20 19:32:59.882592 containerd[2800]: time="2025-06-20T19:32:59.882572911Z" level=info msg="shim disconnected" id=98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 namespace=k8s.io Jun 20 19:32:59.882704 containerd[2800]: time="2025-06-20T19:32:59.882593031Z" level=warning msg="cleaning up after shim disconnected" id=98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 namespace=k8s.io Jun 20 19:32:59.882704 containerd[2800]: time="2025-06-20T19:32:59.882619111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:32:59.883419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4-rootfs.mount: Deactivated successfully. Jun 20 19:32:59.891778 containerd[2800]: time="2025-06-20T19:32:59.891749222Z" level=info msg="received exit event sandbox_id:\"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" exit_status:137 exited_at:{seconds:1750447979 nanos:863001439}" Jun 20 19:32:59.893844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4-shm.mount: Deactivated successfully. 
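
The application container above exits with status 1 after the terminate signal, while the sandbox's pause task reports exit_status:137; by the usual 128+N convention that encodes death by SIGKILL (signal 9). A tiny Go decoder for that convention:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // exit_status:137 from the sandbox task above; statuses over 128
        // conventionally mean "killed by signal (status - 128)".
        status := 137
        if status > 128 {
            sig := syscall.Signal(status - 128)
            fmt.Printf("exit status %d => signal %d (%v)\n", status, status-128, sig)
        }
    }
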
Jun 20 19:32:59.928455 systemd-networkd[2708]: cali64498d7e235: Link DOWN Jun 20 19:32:59.928461 systemd-networkd[2708]: cali64498d7e235: Lost carrier Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.927 [INFO][9657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.927 [INFO][9657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" iface="eth0" netns="/var/run/netns/cni-3b62b6bc-87f8-3ca9-353d-1305611751e6" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.927 [INFO][9657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" iface="eth0" netns="/var/run/netns/cni-3b62b6bc-87f8-3ca9-353d-1305611751e6" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.950 [INFO][9657] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" after=22.144811ms iface="eth0" netns="/var/run/netns/cni-3b62b6bc-87f8-3ca9-353d-1305611751e6" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.950 [INFO][9657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.950 [INFO][9657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.968 [INFO][9684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.968 [INFO][9684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:32:59.968 [INFO][9684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:33:00.005 [INFO][9684] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:33:00.005 [INFO][9684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:33:00.006 [INFO][9684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:33:00.009221 containerd[2800]: 2025-06-20 19:33:00.007 [INFO][9657] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:00.009655 containerd[2800]: time="2025-06-20T19:33:00.009392861Z" level=info msg="TearDown network for sandbox \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" successfully" Jun 20 19:33:00.009655 containerd[2800]: time="2025-06-20T19:33:00.009418901Z" level=info msg="StopPodSandbox for \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" returns successfully" Jun 20 19:33:00.011538 systemd[1]: run-netns-cni\x2d3b62b6bc\x2d87f8\x2d3ca9\x2d353d\x2d1305611751e6.mount: Deactivated successfully. Jun 20 19:33:00.103385 kubelet[4320]: I0620 19:33:00.103287 4320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7tq\" (UniqueName: \"kubernetes.io/projected/fb8f50da-2df9-4446-a7af-2903440861ca-kube-api-access-xh7tq\") pod \"fb8f50da-2df9-4446-a7af-2903440861ca\" (UID: \"fb8f50da-2df9-4446-a7af-2903440861ca\") " Jun 20 19:33:00.103385 kubelet[4320]: I0620 19:33:00.103335 4320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb8f50da-2df9-4446-a7af-2903440861ca-calico-apiserver-certs\") pod \"fb8f50da-2df9-4446-a7af-2903440861ca\" (UID: \"fb8f50da-2df9-4446-a7af-2903440861ca\") " Jun 20 19:33:00.105666 kubelet[4320]: I0620 19:33:00.105632 4320 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8f50da-2df9-4446-a7af-2903440861ca-kube-api-access-xh7tq" (OuterVolumeSpecName: "kube-api-access-xh7tq") pod "fb8f50da-2df9-4446-a7af-2903440861ca" (UID: "fb8f50da-2df9-4446-a7af-2903440861ca"). InnerVolumeSpecName "kube-api-access-xh7tq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:33:00.105700 kubelet[4320]: I0620 19:33:00.105669 4320 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8f50da-2df9-4446-a7af-2903440861ca-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "fb8f50da-2df9-4446-a7af-2903440861ca" (UID: "fb8f50da-2df9-4446-a7af-2903440861ca"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:33:00.107281 systemd[1]: var-lib-kubelet-pods-fb8f50da\x2d2df9\x2d4446\x2da7af\x2d2903440861ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxh7tq.mount: Deactivated successfully. Jun 20 19:33:00.107370 systemd[1]: var-lib-kubelet-pods-fb8f50da\x2d2df9\x2d4446\x2da7af\x2d2903440861ca-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jun 20 19:33:00.204144 kubelet[4320]: I0620 19:33:00.204121 4320 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb8f50da-2df9-4446-a7af-2903440861ca-calico-apiserver-certs\") on node \"ci-4344.1.0-a-403d322406\" DevicePath \"\"" Jun 20 19:33:00.204200 kubelet[4320]: I0620 19:33:00.204144 4320 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xh7tq\" (UniqueName: \"kubernetes.io/projected/fb8f50da-2df9-4446-a7af-2903440861ca-kube-api-access-xh7tq\") on node \"ci-4344.1.0-a-403d322406\" DevicePath \"\"" Jun 20 19:33:00.217610 kubelet[4320]: I0620 19:33:00.217593 4320 scope.go:117] "RemoveContainer" containerID="9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe" Jun 20 19:33:00.219097 containerd[2800]: time="2025-06-20T19:33:00.219070264Z" level=info msg="RemoveContainer for \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\"" Jun 20 19:33:00.221088 systemd[1]: Removed slice kubepods-besteffort-podfb8f50da_2df9_4446_a7af_2903440861ca.slice - libcontainer container kubepods-besteffort-podfb8f50da_2df9_4446_a7af_2903440861ca.slice. Jun 20 19:33:00.221193 systemd[1]: kubepods-besteffort-podfb8f50da_2df9_4446_a7af_2903440861ca.slice: Consumed 1.694s CPU time, 79.6M memory peak. Jun 20 19:33:00.221387 containerd[2800]: time="2025-06-20T19:33:00.221369084Z" level=info msg="RemoveContainer for \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" returns successfully" Jun 20 19:33:00.221513 kubelet[4320]: I0620 19:33:00.221499 4320 scope.go:117] "RemoveContainer" containerID="9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe" Jun 20 19:33:00.221684 containerd[2800]: time="2025-06-20T19:33:00.221656606Z" level=error msg="ContainerStatus for \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\": not found" Jun 20 19:33:00.221798 kubelet[4320]: E0620 19:33:00.221772 4320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\": not found" containerID="9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe" Jun 20 19:33:00.221827 kubelet[4320]: I0620 19:33:00.221807 4320 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe"} err="failed to get container status \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe\": not found" Jun 20 19:33:00.360296 containerd[2800]: time="2025-06-20T19:33:00.360179891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"8d4e1265994f6bbee37df0cac388754f14af6d6000b3f8afa063dd844a0d7717\" pid:9718 exited_at:{seconds:1750447980 nanos:359797328}" Jun 20 19:33:01.152005 containerd[2800]: time="2025-06-20T19:33:01.151969645Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"5d704bd6931fa550db30fd8dbf6e1e2c4c5e7331d388a1fd435ee782220f0510\" 
pid:9755 exited_at:{seconds:1750447981 nanos:151821404}" Jun 20 19:33:02.035676 kubelet[4320]: I0620 19:33:02.035642 4320 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8f50da-2df9-4446-a7af-2903440861ca" path="/var/lib/kubelet/pods/fb8f50da-2df9-4446-a7af-2903440861ca/volumes" Jun 20 19:33:11.118617 containerd[2800]: time="2025-06-20T19:33:11.118577487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"89cf641a39c96ee202ce49d442db2ebdf4883ba1289567aa512af1e9c3514560\" pid:9794 exited_at:{seconds:1750447991 nanos:118356484}" Jun 20 19:33:26.153240 containerd[2800]: time="2025-06-20T19:33:26.153193920Z" level=info msg="StopPodSandbox for \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\"" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.183 [WARNING][9843] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.183 [INFO][9843] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.183 [INFO][9843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" iface="eth0" netns="" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.183 [INFO][9843] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.183 [INFO][9843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.199 [INFO][9862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.200 [INFO][9862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.200 [INFO][9862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.207 [WARNING][9862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.207 [INFO][9862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.208 [INFO][9862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:33:26.211587 containerd[2800]: 2025-06-20 19:33:26.210 [INFO][9843] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.211886 containerd[2800]: time="2025-06-20T19:33:26.211629271Z" level=info msg="TearDown network for sandbox \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" successfully" Jun 20 19:33:26.211886 containerd[2800]: time="2025-06-20T19:33:26.211660192Z" level=info msg="StopPodSandbox for \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" returns successfully" Jun 20 19:33:26.212014 containerd[2800]: time="2025-06-20T19:33:26.211986878Z" level=info msg="RemovePodSandbox for \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\"" Jun 20 19:33:26.212042 containerd[2800]: time="2025-06-20T19:33:26.212026359Z" level=info msg="Forcibly stopping sandbox \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\"" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.242 [WARNING][9889] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" WorkloadEndpoint="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.242 [INFO][9889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.242 [INFO][9889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" iface="eth0" netns="" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.242 [INFO][9889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.242 [INFO][9889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.260 [INFO][9909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.260 [INFO][9909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.260 [INFO][9909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.269 [WARNING][9909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.269 [INFO][9909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" HandleID="k8s-pod-network.98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Workload="ci--4344.1.0--a--403d322406-k8s-calico--apiserver--796757f947--zxknb-eth0" Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.270 [INFO][9909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:33:26.274239 containerd[2800]: 2025-06-20 19:33:26.272 [INFO][9889] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4" Jun 20 19:33:26.274591 containerd[2800]: time="2025-06-20T19:33:26.274273745Z" level=info msg="TearDown network for sandbox \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" successfully" Jun 20 19:33:26.275735 containerd[2800]: time="2025-06-20T19:33:26.275710733Z" level=info msg="Ensure that sandbox 98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 in task-service has been cleanup successfully" Jun 20 19:33:26.276469 containerd[2800]: time="2025-06-20T19:33:26.276445667Z" level=info msg="RemovePodSandbox \"98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4\" returns successfully" Jun 20 19:33:26.718632 containerd[2800]: time="2025-06-20T19:33:26.718588334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"7677e2c3da2159d5e259efeaddfe223bcfc21634bbdcd8e88f03493485c3d685\" pid:9940 exited_at:{seconds:1750448006 nanos:718343770}" Jun 20 19:33:30.353816 containerd[2800]: time="2025-06-20T19:33:30.353770945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"aa0455ea8bd65cd91d8e7da2f8abb532984f8a3429bd351ad22e48ad2e64a968\" pid:9978 exited_at:{seconds:1750448010 nanos:353514460}" Jun 20 19:33:31.146591 containerd[2800]: time="2025-06-20T19:33:31.146559664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"2a267b90b32d3e4b5563bb6b30625e0c3e4cd5d5f2155f087b53436de2b3f28c\" pid:10013 exited_at:{seconds:1750448011 nanos:146384741}" Jun 20 19:33:41.111012 containerd[2800]: time="2025-06-20T19:33:41.110971014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"72543e551dfe782de041837a3aed4533fbd67a4e8587b7beca340e2bfeeb81fe\" pid:10069 exited_at:{seconds:1750448021 nanos:110730969}" Jun 20 19:33:51.359082 containerd[2800]: time="2025-06-20T19:33:51.359005976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"8d03ec40773211c06263350410dcf1508874837ae628421073f24805a2c28f76\" pid:10119 exited_at:{seconds:1750448031 nanos:358837652}" Jun 20 19:34:00.359780 containerd[2800]: time="2025-06-20T19:34:00.359727621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"3f08350bc85430d3a4d6ef5352be8c7497284d1a78071cae5573178bd65ae2b4\" pid:10155 exited_at:{seconds:1750448040 nanos:359436494}" Jun 20 19:34:01.153683 containerd[2800]: time="2025-06-20T19:34:01.153654360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"c0986c0905e8f308a796af27f140d9c6728efb301780e2a539aa38d3a5de46c5\" pid:10191 exited_at:{seconds:1750448041 nanos:153473555}" Jun 20 19:34:11.124108 containerd[2800]: time="2025-06-20T19:34:11.124056687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"43ebc33e7eb12a3d8bc57b7fd52ef15c2714c758caac1bf5fede005c2c511a51\" pid:10222 exited_at:{seconds:1750448051 nanos:123834401}" Jun 20 19:34:26.724804 containerd[2800]: time="2025-06-20T19:34:26.724752340Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"d2c085c739f3723b331359d477d87aaed89ee98838a964fd008d4d011be1c6de\" pid:10266 exited_at:{seconds:1750448066 nanos:724474212}" Jun 20 19:34:30.358196 containerd[2800]: time="2025-06-20T19:34:30.358154423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"f8a6130471e6c0c66860a156073e452501b88bc6aa7035cbeafd33383cdb85bb\" pid:10305 exited_at:{seconds:1750448070 nanos:357953658}" Jun 20 19:34:31.148761 containerd[2800]: time="2025-06-20T19:34:31.148725958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"7a4618ac677b3437a8599d0682fbc3f6ba67477d01016b029953b20ae6c8d492\" pid:10339 exited_at:{seconds:1750448071 nanos:148543353}" Jun 20 19:34:41.122351 containerd[2800]: time="2025-06-20T19:34:41.122299015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"bacd7080e8e215ff6e9893bd65248f4cb192ddb6076886e3eb05a479ea94104f\" pid:10375 exited_at:{seconds:1750448081 nanos:122041208}" Jun 20 19:34:51.361930 containerd[2800]: time="2025-06-20T19:34:51.361894060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"ba8194647f108d04e4b236deeb12fcec0c534ad5c8f2a4d7a8a93db5002676ca\" pid:10418 exited_at:{seconds:1750448091 nanos:361722065}" Jun 20 19:35:00.350100 containerd[2800]: time="2025-06-20T19:35:00.350058936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"1de35ae57888979c6299bf01b0bb102e9652d9f99c4b66b5d14914e96b70f2a8\" pid:10441 exited_at:{seconds:1750448100 nanos:349810621}" Jun 20 19:35:01.151670 containerd[2800]: time="2025-06-20T19:35:01.151641231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"8c7652a99706f2e837cb4d90d8f068e08dca72b59b57b731dea0af7330ae8198\" pid:10476 exited_at:{seconds:1750448101 nanos:151475954}" Jun 20 19:35:11.110926 containerd[2800]: time="2025-06-20T19:35:11.110886572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"73228866c8ee124e0f638a75d7e54c64cc8e2d0958be535d108240c2d14b7795\" pid:10504 exited_at:{seconds:1750448111 nanos:110556297}" Jun 20 19:35:26.715707 containerd[2800]: time="2025-06-20T19:35:26.715619035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"206598ef62edb3912a23ab578f05dc9951cdc4f1786a45821ff3868bda46861c\" pid:10570 exited_at:{seconds:1750448126 nanos:715393277}" Jun 20 19:35:30.357817 containerd[2800]: time="2025-06-20T19:35:30.357769426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"d0be29cd6e42136f5234e152e083163fc090bcebb03dbaaea9a643213866d950\" pid:10609 exited_at:{seconds:1750448130 nanos:357526947}" Jun 20 19:35:31.146971 containerd[2800]: time="2025-06-20T19:35:31.146943080Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"213f194439a65c041a99ef6cfbba0dc7488c9b170074bb74d68a8ba8f80dee9a\" pid:10644 exited_at:{seconds:1750448131 nanos:146748561}" Jun 20 19:35:41.119370 containerd[2800]: time="2025-06-20T19:35:41.119333172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"383c247a40700328beede63092f3cb2ec55ceb3d77ec84a8ed935b9ccf259a60\" pid:10670 exited_at:{seconds:1750448141 nanos:119121772}" Jun 20 19:35:51.349686 containerd[2800]: time="2025-06-20T19:35:51.349631632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"f5230f06202b576fe3f8694f4190e37216fa0b462855dda02fcd17a3ffb7a944\" pid:10716 exited_at:{seconds:1750448151 nanos:349476832}" Jun 20 19:36:00.361395 containerd[2800]: time="2025-06-20T19:36:00.361354730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"985c480373a01127a2ff6fe70089fc8bea654d60d45ee43a4b101cbce69b6e28\" pid:10746 exited_at:{seconds:1750448160 nanos:361084888}" Jun 20 19:36:01.143888 containerd[2800]: time="2025-06-20T19:36:01.143833276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"24ef1463bcc5cf6de2a294f4b5645d147c2d4400043270ea11d07cfd97ee1e29\" pid:10791 exited_at:{seconds:1750448161 nanos:143654355}" Jun 20 19:36:11.119336 containerd[2800]: time="2025-06-20T19:36:11.119290827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"d12173582bcfc155c1704186857213c3af1ae05fb7ddfe89ad6f8eb69709cfe0\" pid:10830 exited_at:{seconds:1750448171 nanos:119083665}" Jun 20 19:36:22.352347 containerd[2800]: time="2025-06-20T19:36:22.352277097Z" level=warning msg="container event discarded" container=f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac type=CONTAINER_CREATED_EVENT Jun 20 19:36:22.363516 containerd[2800]: time="2025-06-20T19:36:22.363482892Z" level=warning msg="container event discarded" container=f049fc6fa3575793695eb290074d52d95b24c39c439d5120bbba5423a56aadac type=CONTAINER_STARTED_EVENT Jun 20 19:36:22.363555 containerd[2800]: time="2025-06-20T19:36:22.363515093Z" level=warning msg="container event discarded" container=16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0 type=CONTAINER_CREATED_EVENT Jun 20 19:36:22.363555 containerd[2800]: time="2025-06-20T19:36:22.363523973Z" level=warning msg="container event discarded" container=16448c50167a0ce057098819007dd2d154cb5f0e7f814c66641f8f4ae969f9f0 type=CONTAINER_STARTED_EVENT Jun 20 19:36:22.363555 containerd[2800]: time="2025-06-20T19:36:22.363531533Z" level=warning msg="container event discarded" container=31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823 type=CONTAINER_CREATED_EVENT Jun 20 19:36:22.363555 containerd[2800]: time="2025-06-20T19:36:22.363540253Z" level=warning msg="container event discarded" container=31d883fcd7fb4eb14dc213dea2fe88083b65f50644dc07fc7034a8f02242b823 type=CONTAINER_STARTED_EVENT Jun 20 19:36:22.363555 containerd[2800]: time="2025-06-20T19:36:22.363547373Z" level=warning msg="container event discarded" container=afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814 type=CONTAINER_CREATED_EVENT Jun 20 
19:36:22.378725 containerd[2800]: time="2025-06-20T19:36:22.378700449Z" level=warning msg="container event discarded" container=51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068 type=CONTAINER_CREATED_EVENT Jun 20 19:36:22.378725 containerd[2800]: time="2025-06-20T19:36:22.378715009Z" level=warning msg="container event discarded" container=3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38 type=CONTAINER_CREATED_EVENT Jun 20 19:36:22.413883 containerd[2800]: time="2025-06-20T19:36:22.413859292Z" level=warning msg="container event discarded" container=afaa8d9286decd18ba9d277b4ccb8678bbfe528e522b222468a6230e7c06e814 type=CONTAINER_STARTED_EVENT Jun 20 19:36:22.413883 containerd[2800]: time="2025-06-20T19:36:22.413872772Z" level=warning msg="container event discarded" container=3676915de18ffbe520617da2302fc2c6168f1824d5fc19ff122ae92e5a3a3e38 type=CONTAINER_STARTED_EVENT Jun 20 19:36:22.413883 containerd[2800]: time="2025-06-20T19:36:22.413880052Z" level=warning msg="container event discarded" container=51d1d92ea25740886927918ef434c5e4f325bd408d7e5f9f31e4e6ccff1c1068 type=CONTAINER_STARTED_EVENT Jun 20 19:36:26.717922 containerd[2800]: time="2025-06-20T19:36:26.717883908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"72abe6398e368ef8596174f90c3a17e6642770277003f4a312b1853e3c107c28\" pid:10876 exited_at:{seconds:1750448186 nanos:717625905}" Jun 20 19:36:30.345191 containerd[2800]: time="2025-06-20T19:36:30.345146216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"50b84aae58c7295f5b048493aea46bd9b6a314a18f61dc175aa31b12bd1de03c\" pid:10915 exited_at:{seconds:1750448190 nanos:344947734}" Jun 20 19:36:31.153059 containerd[2800]: time="2025-06-20T19:36:31.153023189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"95717e1fe02f82546e06943f0104691e2af8c10a4af6f471983316df86ef94f7\" pid:10949 exited_at:{seconds:1750448191 nanos:152804986}" Jun 20 19:36:32.311908 containerd[2800]: time="2025-06-20T19:36:32.311836008Z" level=warning msg="container event discarded" container=e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f type=CONTAINER_CREATED_EVENT Jun 20 19:36:32.311908 containerd[2800]: time="2025-06-20T19:36:32.311892569Z" level=warning msg="container event discarded" container=e789c6839bffab8a3fa1095c94d2dabcd667a948ae8dc522a14330ac5d96d08f type=CONTAINER_STARTED_EVENT Jun 20 19:36:32.323082 containerd[2800]: time="2025-06-20T19:36:32.323052023Z" level=warning msg="container event discarded" container=0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781 type=CONTAINER_CREATED_EVENT Jun 20 19:36:32.342206 containerd[2800]: time="2025-06-20T19:36:32.342176854Z" level=warning msg="container event discarded" container=f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665 type=CONTAINER_CREATED_EVENT Jun 20 19:36:32.342206 containerd[2800]: time="2025-06-20T19:36:32.342196814Z" level=warning msg="container event discarded" container=f95a5726d723722c5f55c2bbbadd5ecd45491f2295d47e563677b53800853665 type=CONTAINER_STARTED_EVENT Jun 20 19:36:32.361431 containerd[2800]: time="2025-06-20T19:36:32.361405485Z" level=warning msg="container event discarded" container=0437f7cf0e5c0f984c78a86f17c6a09a5b877161a707a5acb00db2f91264e781 type=CONTAINER_STARTED_EVENT Jun 20 
19:36:33.382174 containerd[2800]: time="2025-06-20T19:36:33.382134079Z" level=warning msg="container event discarded" container=a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30 type=CONTAINER_CREATED_EVENT Jun 20 19:36:33.429657 containerd[2800]: time="2025-06-20T19:36:33.429623098Z" level=warning msg="container event discarded" container=a0de9860b96237319df2c34358d54c0eb1f21d28dad64531e34eacad7f871a30 type=CONTAINER_STARTED_EVENT Jun 20 19:36:41.119449 containerd[2800]: time="2025-06-20T19:36:41.119411095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"3c528f7c8868c825f7bdc5ef23ee98c986215a2032e47bfaf971b5eefc1d4532\" pid:10974 exited_at:{seconds:1750448201 nanos:119056370}" Jun 20 19:36:41.524110 containerd[2800]: time="2025-06-20T19:36:41.524067350Z" level=warning msg="container event discarded" container=492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7 type=CONTAINER_CREATED_EVENT Jun 20 19:36:41.524246 containerd[2800]: time="2025-06-20T19:36:41.524220872Z" level=warning msg="container event discarded" container=492dddc5e66b5963bafe57c5c3e71e55487540f116e4a1dfad61d8310402c7c7 type=CONTAINER_STARTED_EVENT Jun 20 19:36:41.713465 containerd[2800]: time="2025-06-20T19:36:41.713387603Z" level=warning msg="container event discarded" container=6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671 type=CONTAINER_CREATED_EVENT Jun 20 19:36:41.713465 containerd[2800]: time="2025-06-20T19:36:41.713429364Z" level=warning msg="container event discarded" container=6cfb7555af62a52adc5aef1f707b7b803a7a8496a9e6ea7f893b152016af6671 type=CONTAINER_STARTED_EVENT Jun 20 19:36:42.584190 containerd[2800]: time="2025-06-20T19:36:42.584148176Z" level=warning msg="container event discarded" container=72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57 type=CONTAINER_CREATED_EVENT Jun 20 19:36:42.639373 containerd[2800]: time="2025-06-20T19:36:42.639334202Z" level=warning msg="container event discarded" container=72135cd426aff7cb8d467e4528fd12344084e25ce49b974b068667fb0ff27c57 type=CONTAINER_STARTED_EVENT Jun 20 19:36:42.992019 containerd[2800]: time="2025-06-20T19:36:42.991973530Z" level=warning msg="container event discarded" container=77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5 type=CONTAINER_CREATED_EVENT Jun 20 19:36:43.056215 containerd[2800]: time="2025-06-20T19:36:43.056178885Z" level=warning msg="container event discarded" container=77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5 type=CONTAINER_STARTED_EVENT Jun 20 19:36:43.357030 containerd[2800]: time="2025-06-20T19:36:43.356946672Z" level=warning msg="container event discarded" container=77938463749d581cc1a53651be15705f1fc05e5e4b0dfe6fecefe4db56421bd5 type=CONTAINER_STOPPED_EVENT Jun 20 19:36:45.327918 containerd[2800]: time="2025-06-20T19:36:45.327837283Z" level=warning msg="container event discarded" container=9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1 type=CONTAINER_CREATED_EVENT Jun 20 19:36:45.391157 containerd[2800]: time="2025-06-20T19:36:45.391120684Z" level=warning msg="container event discarded" container=9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1 type=CONTAINER_STARTED_EVENT Jun 20 19:36:45.917909 containerd[2800]: time="2025-06-20T19:36:45.917870576Z" level=warning msg="container event discarded" container=9fe6197251c17ac01e2c611bb4ec6bd482bdb0d4e376d2df3aa1d09b1fe1b3d1 type=CONTAINER_STOPPED_EVENT Jun 20 
19:36:48.613804 containerd[2800]: time="2025-06-20T19:36:48.613725957Z" level=warning msg="container event discarded" container=72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d type=CONTAINER_CREATED_EVENT Jun 20 19:36:48.683992 containerd[2800]: time="2025-06-20T19:36:48.683960242Z" level=warning msg="container event discarded" container=72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d type=CONTAINER_STARTED_EVENT Jun 20 19:36:49.643047 containerd[2800]: time="2025-06-20T19:36:49.642976394Z" level=warning msg="container event discarded" container=6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2 type=CONTAINER_CREATED_EVENT Jun 20 19:36:49.643047 containerd[2800]: time="2025-06-20T19:36:49.643016195Z" level=warning msg="container event discarded" container=6f04dc648a5c11e1515b34daf5ca8445b3fbc0adcccb9cdea091620d0fe5a9e2 type=CONTAINER_STARTED_EVENT Jun 20 19:36:50.161885 containerd[2800]: time="2025-06-20T19:36:50.161834137Z" level=warning msg="container event discarded" container=9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c type=CONTAINER_CREATED_EVENT Jun 20 19:36:50.220070 containerd[2800]: time="2025-06-20T19:36:50.220034704Z" level=warning msg="container event discarded" container=9d6f4c4fd9114ca3f0196449d157d0a44435dfc8b0cd7b42c7e029d2eb4a439c type=CONTAINER_STARTED_EVENT Jun 20 19:36:50.982069 containerd[2800]: time="2025-06-20T19:36:50.982027026Z" level=warning msg="container event discarded" container=71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5 type=CONTAINER_CREATED_EVENT Jun 20 19:36:51.046346 containerd[2800]: time="2025-06-20T19:36:51.046310927Z" level=warning msg="container event discarded" container=71db15f632e1d56d403e2681a954f2ab4e7c46d737e7cbc2d163cd14688253e5 type=CONTAINER_STARTED_EVENT Jun 20 19:36:51.356769 containerd[2800]: time="2025-06-20T19:36:51.356653317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"531727da51f597ad6a3a97fe2d2da0f4a86619e48f25aced75198633def805dd\" pid:11041 exited_at:{seconds:1750448211 nanos:356451154}" Jun 20 19:36:58.211082 containerd[2800]: time="2025-06-20T19:36:58.211032750Z" level=warning msg="container event discarded" container=f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa type=CONTAINER_CREATED_EVENT Jun 20 19:36:58.211082 containerd[2800]: time="2025-06-20T19:36:58.211068031Z" level=warning msg="container event discarded" container=f0d66f405ca2ddfd48530bfb45b42a31f3f5fa8d7a47d5497fea322e15a263aa type=CONTAINER_STARTED_EVENT Jun 20 19:36:58.211082 containerd[2800]: time="2025-06-20T19:36:58.211079711Z" level=warning msg="container event discarded" container=84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15 type=CONTAINER_CREATED_EVENT Jun 20 19:36:58.271348 containerd[2800]: time="2025-06-20T19:36:58.271313161Z" level=warning msg="container event discarded" container=84dd9fe1e418bc57834dac7a4b435df48c3afcf5dfdfefbc624755dd931e4f15 type=CONTAINER_STARTED_EVENT Jun 20 19:36:58.282507 containerd[2800]: time="2025-06-20T19:36:58.282484134Z" level=warning msg="container event discarded" container=7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec type=CONTAINER_CREATED_EVENT Jun 20 19:36:58.282507 containerd[2800]: time="2025-06-20T19:36:58.282497334Z" level=warning msg="container event discarded" container=7d3a15711785e6fb4755f43d0db89b6ca9c8bfc40ed02ea885286c21270fa8ec type=CONTAINER_STARTED_EVENT Jun 20 
19:36:59.210535 containerd[2800]: time="2025-06-20T19:36:59.210475451Z" level=warning msg="container event discarded" container=788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe type=CONTAINER_CREATED_EVENT Jun 20 19:36:59.210535 containerd[2800]: time="2025-06-20T19:36:59.210504891Z" level=warning msg="container event discarded" container=788871346598db7a55e4aaaa2eb0c371fc2beb711ee20a0b9224d4427cd44ffe type=CONTAINER_STARTED_EVENT Jun 20 19:36:59.232698 containerd[2800]: time="2025-06-20T19:36:59.232673596Z" level=warning msg="container event discarded" container=23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4 type=CONTAINER_CREATED_EVENT Jun 20 19:36:59.285917 containerd[2800]: time="2025-06-20T19:36:59.285877904Z" level=warning msg="container event discarded" container=2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876 type=CONTAINER_CREATED_EVENT Jun 20 19:36:59.285971 containerd[2800]: time="2025-06-20T19:36:59.285915784Z" level=warning msg="container event discarded" container=2cf6365a205bb20819ea830b584dca54e7c5f1b7101e4f6e2c2ccff2c8171876 type=CONTAINER_STARTED_EVENT Jun 20 19:36:59.301104 containerd[2800]: time="2025-06-20T19:36:59.301084540Z" level=warning msg="container event discarded" container=23f0a0bb9c323a154205176da1cc24bb6160a01863a2b00a139cf388c250c1c4 type=CONTAINER_STARTED_EVENT Jun 20 19:37:00.154494 containerd[2800]: time="2025-06-20T19:37:00.154460188Z" level=warning msg="container event discarded" container=4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85 type=CONTAINER_CREATED_EVENT Jun 20 19:37:00.212674 containerd[2800]: time="2025-06-20T19:37:00.212647659Z" level=warning msg="container event discarded" container=2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903 type=CONTAINER_CREATED_EVENT Jun 20 19:37:00.212724 containerd[2800]: time="2025-06-20T19:37:00.212679139Z" level=warning msg="container event discarded" container=2573b34ba595570d9fc80e8983f3a6ced73c3e58a8a399e7bdb696a4780c2903 type=CONTAINER_STARTED_EVENT Jun 20 19:37:00.212724 containerd[2800]: time="2025-06-20T19:37:00.212688220Z" level=warning msg="container event discarded" container=4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85 type=CONTAINER_STARTED_EVENT Jun 20 19:37:00.212724 containerd[2800]: time="2025-06-20T19:37:00.212696380Z" level=warning msg="container event discarded" container=b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8 type=CONTAINER_CREATED_EVENT Jun 20 19:37:00.268946 containerd[2800]: time="2025-06-20T19:37:00.268927820Z" level=warning msg="container event discarded" container=b483b623ad6f36864ce6e21378f29e0f25d7a8213a2db06172e72e630f7612d8 type=CONTAINER_STARTED_EVENT Jun 20 19:37:00.323569 containerd[2800]: time="2025-06-20T19:37:00.323541995Z" level=warning msg="container event discarded" container=9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc type=CONTAINER_CREATED_EVENT Jun 20 19:37:00.323569 containerd[2800]: time="2025-06-20T19:37:00.323568635Z" level=warning msg="container event discarded" container=9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc type=CONTAINER_STARTED_EVENT Jun 20 19:37:00.323664 containerd[2800]: time="2025-06-20T19:37:00.323578036Z" level=warning msg="container event discarded" container=1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7 type=CONTAINER_CREATED_EVENT Jun 20 19:37:00.351369 containerd[2800]: time="2025-06-20T19:37:00.351344110Z" level=info msg="TaskExit event in podsandbox 
handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"cfd64e53791cc5df40ed757e56514f952d80fde3d670cf94794580d937bf288a\" pid:11064 exited_at:{seconds:1750448220 nanos:351089346}" Jun 20 19:37:00.389858 containerd[2800]: time="2025-06-20T19:37:00.389799952Z" level=warning msg="container event discarded" container=1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7 type=CONTAINER_STARTED_EVENT Jun 20 19:37:00.727753 containerd[2800]: time="2025-06-20T19:37:00.727723003Z" level=warning msg="container event discarded" container=ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059 type=CONTAINER_CREATED_EVENT Jun 20 19:37:00.794845 containerd[2800]: time="2025-06-20T19:37:00.794824733Z" level=warning msg="container event discarded" container=ec776216da9e804639e6ae8fc9c1b6e3caf2ec6f310b44592e8909b78d973059 type=CONTAINER_STARTED_EVENT Jun 20 19:37:01.151948 containerd[2800]: time="2025-06-20T19:37:01.151888419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"06aba9cc21253131c88e6f9206bca480e6e3d1930edbc664639df23cef4689f6\" pid:11099 exited_at:{seconds:1750448221 nanos:151705096}" Jun 20 19:37:01.213963 containerd[2800]: time="2025-06-20T19:37:01.213937597Z" level=warning msg="container event discarded" container=24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8 type=CONTAINER_CREATED_EVENT Jun 20 19:37:01.213963 containerd[2800]: time="2025-06-20T19:37:01.213961877Z" level=warning msg="container event discarded" container=24c76595c04c4f8e1f5c4a64e6a543407514d43a019c5f5fdad07951a0a20ca8 type=CONTAINER_STARTED_EVENT Jun 20 19:37:01.308270 containerd[2800]: time="2025-06-20T19:37:01.308246763Z" level=warning msg="container event discarded" container=98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 type=CONTAINER_CREATED_EVENT Jun 20 19:37:01.308270 containerd[2800]: time="2025-06-20T19:37:01.308270563Z" level=warning msg="container event discarded" container=98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 type=CONTAINER_STARTED_EVENT Jun 20 19:37:01.308524 containerd[2800]: time="2025-06-20T19:37:01.308278843Z" level=warning msg="container event discarded" container=9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe type=CONTAINER_CREATED_EVENT Jun 20 19:37:01.322465 containerd[2800]: time="2025-06-20T19:37:01.322430386Z" level=warning msg="container event discarded" container=3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b type=CONTAINER_CREATED_EVENT Jun 20 19:37:01.374636 containerd[2800]: time="2025-06-20T19:37:01.374621649Z" level=warning msg="container event discarded" container=9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe type=CONTAINER_STARTED_EVENT Jun 20 19:37:01.374636 containerd[2800]: time="2025-06-20T19:37:01.374634849Z" level=warning msg="container event discarded" container=3729c7d697a7b89993d754d1a422c91f9081a93037fd9a133a72cec0859efd5b type=CONTAINER_STARTED_EVENT Jun 20 19:37:02.217898 containerd[2800]: time="2025-06-20T19:37:02.217873798Z" level=warning msg="container event discarded" container=65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722 type=CONTAINER_CREATED_EVENT Jun 20 19:37:02.278194 containerd[2800]: time="2025-06-20T19:37:02.278162554Z" level=warning msg="container event discarded" container=65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722 type=CONTAINER_STARTED_EVENT Jun 20 
19:37:11.119980 containerd[2800]: time="2025-06-20T19:37:11.119939692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"f41f975c14d079bd82e19b18475ff59c99ed98b2474b518fedd657cfe7db59aa\" pid:11124 exited_at:{seconds:1750448231 nanos:119638327}" Jun 20 19:37:23.734897 containerd[2800]: time="2025-06-20T19:37:23.734820648Z" level=warning msg="container event discarded" container=1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7 type=CONTAINER_STOPPED_EVENT Jun 20 19:37:23.772092 containerd[2800]: time="2025-06-20T19:37:23.772046865Z" level=warning msg="container event discarded" container=9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc type=CONTAINER_STOPPED_EVENT Jun 20 19:37:23.882341 containerd[2800]: time="2025-06-20T19:37:23.882307169Z" level=warning msg="container event discarded" container=f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192 type=CONTAINER_CREATED_EVENT Jun 20 19:37:23.882341 containerd[2800]: time="2025-06-20T19:37:23.882330810Z" level=warning msg="container event discarded" container=f7e50d3212e355cb8f8eb37d2ed3fc16f927fd6cc08d1ef884312047d46f3192 type=CONTAINER_STARTED_EVENT Jun 20 19:37:23.882341 containerd[2800]: time="2025-06-20T19:37:23.882338410Z" level=warning msg="container event discarded" container=642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1 type=CONTAINER_CREATED_EVENT Jun 20 19:37:23.947602 containerd[2800]: time="2025-06-20T19:37:23.947580321Z" level=warning msg="container event discarded" container=642559119aa7bf7c3800e74a1fa00496eb2e3c09b4055daa2cb4637522ef46d1 type=CONTAINER_STARTED_EVENT Jun 20 19:37:24.171877 containerd[2800]: time="2025-06-20T19:37:24.171770727Z" level=warning msg="container event discarded" container=1115b614f635289dabe0d312d4d0a8ee074c5ca4dd3bc9f5ae7e674c70d671d7 type=CONTAINER_DELETED_EVENT Jun 20 19:37:26.160395 containerd[2800]: time="2025-06-20T19:37:26.160318231Z" level=warning msg="container event discarded" container=9bf5bc6707bad713df23c1c6972b8b6560b852a6429b8dbf37b3a3ea015a73cc type=CONTAINER_DELETED_EVENT Jun 20 19:37:26.716398 containerd[2800]: time="2025-06-20T19:37:26.716346273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"4ed92fcdcee08ec34d9630f0439391c60552116ae65c56e2575fdf8545abcb44\" pid:11175 exited_at:{seconds:1750448246 nanos:716122069}" Jun 20 19:37:30.357069 containerd[2800]: time="2025-06-20T19:37:30.357031740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"1f6b87d87ea20026a6e8e1ca7f4467ab17dc8f13f7392ada4de207bca6319f2c\" pid:11213 exited_at:{seconds:1750448250 nanos:356786136}" Jun 20 19:37:31.142109 containerd[2800]: time="2025-06-20T19:37:31.142061566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"0b0892298623a80ac48ae87b5ddbadd32641566f9ea921d6831bae3c85f980f9\" pid:11247 exited_at:{seconds:1750448251 nanos:141911523}" Jun 20 19:37:41.120628 containerd[2800]: time="2025-06-20T19:37:41.120590201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"d896e21cb04010d6e76ba55d70288c24c57cf81ef952d57a368682cbf2f3e225\" pid:11270 exited_at:{seconds:1750448261 nanos:120360477}" Jun 20 
19:37:49.494993 update_engine[2794]: I20250620 19:37:49.494938 2794 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 19:37:49.494993 update_engine[2794]: I20250620 19:37:49.494989 2794 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 19:37:49.495494 update_engine[2794]: I20250620 19:37:49.495220 2794 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 19:37:49.495553 update_engine[2794]: I20250620 19:37:49.495536 2794 omaha_request_params.cc:62] Current group set to beta Jun 20 19:37:49.495624 update_engine[2794]: I20250620 19:37:49.495612 2794 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 19:37:49.495624 update_engine[2794]: I20250620 19:37:49.495620 2794 update_attempter.cc:643] Scheduling an action processor start. Jun 20 19:37:49.495665 update_engine[2794]: I20250620 19:37:49.495635 2794 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 19:37:49.495665 update_engine[2794]: I20250620 19:37:49.495661 2794 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 19:37:49.495725 update_engine[2794]: I20250620 19:37:49.495713 2794 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 19:37:49.495745 update_engine[2794]: I20250620 19:37:49.495723 2794 omaha_request_action.cc:272] Request: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: Jun 20 19:37:49.495745 update_engine[2794]: I20250620 19:37:49.495728 2794 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:37:49.496030 locksmithd[2830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 19:37:49.496799 update_engine[2794]: I20250620 19:37:49.496780 2794 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:37:49.497114 update_engine[2794]: I20250620 19:37:49.497093 2794 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:37:49.497473 update_engine[2794]: E20250620 19:37:49.497457 2794 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:37:49.497517 update_engine[2794]: I20250620 19:37:49.497504 2794 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 19:37:51.358768 containerd[2800]: time="2025-06-20T19:37:51.358730438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"551647e903dee8fec058bd6c6a3619b7ba0658834f55526e2b510e0cbb7f01bd\" pid:11317 exited_at:{seconds:1750448271 nanos:358558835}" Jun 20 19:37:59.403036 update_engine[2794]: I20250620 19:37:59.402969 2794 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:37:59.403416 update_engine[2794]: I20250620 19:37:59.403229 2794 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:37:59.403475 update_engine[2794]: I20250620 19:37:59.403455 2794 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:37:59.439992 update_engine[2794]: E20250620 19:37:59.439960 2794 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:37:59.440068 update_engine[2794]: I20250620 19:37:59.440043 2794 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 19:37:59.867052 containerd[2800]: time="2025-06-20T19:37:59.866983439Z" level=warning msg="container event discarded" container=9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe type=CONTAINER_STOPPED_EVENT Jun 20 19:37:59.903234 containerd[2800]: time="2025-06-20T19:37:59.903197668Z" level=warning msg="container event discarded" container=98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 type=CONTAINER_STOPPED_EVENT Jun 20 19:38:00.231889 containerd[2800]: time="2025-06-20T19:38:00.231839199Z" level=warning msg="container event discarded" container=9f7072e208bd28db517cc3b7a36565d6c4618456b42899f3a03345f2508c4ffe type=CONTAINER_DELETED_EVENT Jun 20 19:38:00.357470 containerd[2800]: time="2025-06-20T19:38:00.357442666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"207833c282fd484b4ddba62cd9bb0a873e76412e8509dcbfd66bce1bf45715a8\" pid:11341 exited_at:{seconds:1750448280 nanos:357240502}" Jun 20 19:38:01.150962 containerd[2800]: time="2025-06-20T19:38:01.150927336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"4a557c4ea6e6dfc9e6404e638183346cd093d2aec03c385ba0bf0ceeaf87ca14\" pid:11377 exited_at:{seconds:1750448281 nanos:150733293}" Jun 20 19:38:09.404945 update_engine[2794]: I20250620 19:38:09.404874 2794 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:38:09.405368 update_engine[2794]: I20250620 19:38:09.405162 2794 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:38:09.405395 update_engine[2794]: I20250620 19:38:09.405378 2794 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:38:09.405756 update_engine[2794]: E20250620 19:38:09.405738 2794 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:38:09.405786 update_engine[2794]: I20250620 19:38:09.405772 2794 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 19:38:11.113312 containerd[2800]: time="2025-06-20T19:38:11.113274239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"9c2075efc520b10ef97c3a53b6910b82e816b468a1a7b638c74010d2f7d04624\" pid:11405 exited_at:{seconds:1750448291 nanos:113006834}" Jun 20 19:38:19.405431 update_engine[2794]: I20250620 19:38:19.404919 2794 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:38:19.405431 update_engine[2794]: I20250620 19:38:19.405179 2794 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:38:19.405431 update_engine[2794]: I20250620 19:38:19.405395 2794 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:38:19.405941 update_engine[2794]: E20250620 19:38:19.405918 2794 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:38:19.406036 update_engine[2794]: I20250620 19:38:19.406023 2794 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406072 2794 omaha_request_action.cc:617] Omaha request response: Jun 20 19:38:19.406607 update_engine[2794]: E20250620 19:38:19.406142 2794 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406157 2794 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406162 2794 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406167 2794 update_attempter.cc:306] Processing Done. Jun 20 19:38:19.406607 update_engine[2794]: E20250620 19:38:19.406179 2794 update_attempter.cc:619] Update failed. Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406185 2794 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406189 2794 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406194 2794 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406253 2794 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406273 2794 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406278 2794 omaha_request_action.cc:272] Request: Jun 20 19:38:19.406607 update_engine[2794]: Jun 20 19:38:19.406607 update_engine[2794]: Jun 20 19:38:19.406607 update_engine[2794]: Jun 20 19:38:19.406607 update_engine[2794]: Jun 20 19:38:19.406607 update_engine[2794]: Jun 20 19:38:19.406607 update_engine[2794]: Jun 20 19:38:19.406607 update_engine[2794]: I20250620 19:38:19.406284 2794 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:38:19.407016 locksmithd[2830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 20 19:38:19.407212 update_engine[2794]: I20250620 19:38:19.406399 2794 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:38:19.407212 update_engine[2794]: I20250620 19:38:19.406576 2794 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 19:38:19.407446 update_engine[2794]: E20250620 19:38:19.407317 2794 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:38:19.407446 update_engine[2794]: I20250620 19:38:19.407356 2794 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 19:38:19.407446 update_engine[2794]: I20250620 19:38:19.407363 2794 omaha_request_action.cc:617] Omaha request response: Jun 20 19:38:19.407446 update_engine[2794]: I20250620 19:38:19.407370 2794 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:38:19.407446 update_engine[2794]: I20250620 19:38:19.407375 2794 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 19:38:19.407446 update_engine[2794]: I20250620 19:38:19.407379 2794 update_attempter.cc:306] Processing Done. Jun 20 19:38:19.407446 update_engine[2794]: I20250620 19:38:19.407384 2794 update_attempter.cc:310] Error event sent. Jun 20 19:38:19.407446 update_engine[2794]: I20250620 19:38:19.407392 2794 update_check_scheduler.cc:74] Next update check in 48m33s Jun 20 19:38:19.407664 locksmithd[2830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 20 19:38:26.287453 containerd[2800]: time="2025-06-20T19:38:26.287363699Z" level=warning msg="container event discarded" container=98b396f8f898b51ea3b60eb76a510ec543b9a36e4acc492024eb9509523ca9e4 type=CONTAINER_DELETED_EVENT Jun 20 19:38:26.712497 containerd[2800]: time="2025-06-20T19:38:26.712460410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"41753c16ca49a386ce869f687bd8276bf0fa098e2a7da2ffb1120a04f982bb2e\" pid:11455 exited_at:{seconds:1750448306 nanos:712233725}" Jun 20 19:38:30.355745 containerd[2800]: time="2025-06-20T19:38:30.355705379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"3fb7c800feeccae640885158139656453633686ac5bc05d4fd28065b90c282ea\" pid:11510 exited_at:{seconds:1750448310 nanos:355441893}" Jun 20 19:38:31.142797 containerd[2800]: time="2025-06-20T19:38:31.142753511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"cce7e5e8b5d05668694f8fde3a6bbc3a5ade68cced2f036d591208c47d532c0c\" pid:11546 exited_at:{seconds:1750448311 nanos:142589068}" Jun 20 19:38:41.121301 containerd[2800]: time="2025-06-20T19:38:41.121264896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"0594d3a57b6a499dcbc4c008884dd642eec23f87d2efa4fb6a93b6ea00ba3271\" pid:11572 exited_at:{seconds:1750448321 nanos:121037532}" Jun 20 19:38:51.355533 containerd[2800]: time="2025-06-20T19:38:51.355492802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"45cd71c53b349658c23e86839f2248571c0eaac1b14138b7b1fbe5ef1ce2872b\" pid:11612 exited_at:{seconds:1750448331 nanos:355333918}" Jun 20 19:39:00.360634 containerd[2800]: time="2025-06-20T19:39:00.360586813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"2fa76e5d39b6a8aa011fc8525f3282337a129c73ce43161002c61ad053d645d1\" 
pid:11634 exited_at:{seconds:1750448340 nanos:360345808}" Jun 20 19:39:01.153927 containerd[2800]: time="2025-06-20T19:39:01.153895185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"c173e9684094832f0e1bacb0e34a2dd3108ded91a55162a6b4b5e8b4d7fedc14\" pid:11670 exited_at:{seconds:1750448341 nanos:153691661}" Jun 20 19:39:11.122471 containerd[2800]: time="2025-06-20T19:39:11.122426070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"9d1e2129e9a5d5a8513f505077ccb5b5532ec920beda414dff017569889f4f8a\" pid:11696 exited_at:{seconds:1750448351 nanos:122266913}" Jun 20 19:39:26.720006 containerd[2800]: time="2025-06-20T19:39:26.719954112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"df4782ac579ffd8bdb9706922f4aee2df407b5b163452c1f63aaa490b57c68d2\" pid:11737 exited_at:{seconds:1750448366 nanos:719714074}" Jun 20 19:39:30.357506 containerd[2800]: time="2025-06-20T19:39:30.357460866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"9cc227dbb66820a1dfb35f6a4e61c95a374602db77d58dcda4d64aa023135bf2\" pid:11774 exited_at:{seconds:1750448370 nanos:357275187}" Jun 20 19:39:31.152950 containerd[2800]: time="2025-06-20T19:39:31.152913147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"61d64691f0928975902ff1ecfef07dbdd5b32375a2e0f212d9f461955558f7b2\" pid:11810 exited_at:{seconds:1750448371 nanos:152704429}" Jun 20 19:39:38.722933 systemd[1]: Started sshd@7-147.28.145.50:22-147.75.109.163:43068.service - OpenSSH per-connection server daemon (147.75.109.163:43068). Jun 20 19:39:39.128339 sshd[11827]: Accepted publickey for core from 147.75.109.163 port 43068 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI Jun 20 19:39:39.129564 sshd-session[11827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:39:39.133294 systemd-logind[2783]: New session 10 of user core. Jun 20 19:39:39.158007 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:39:39.482834 sshd[11829]: Connection closed by 147.75.109.163 port 43068 Jun 20 19:39:39.483187 sshd-session[11827]: pam_unix(sshd:session): session closed for user core Jun 20 19:39:39.486346 systemd[1]: sshd@7-147.28.145.50:22-147.75.109.163:43068.service: Deactivated successfully. Jun 20 19:39:39.488044 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:39:39.488638 systemd-logind[2783]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:39:39.489500 systemd-logind[2783]: Removed session 10. Jun 20 19:39:41.118010 containerd[2800]: time="2025-06-20T19:39:41.117964424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"3e4e3f08b0258d8eeb3ac967818cecc67f6bd34fba079625c898c63fc602c0a7\" pid:11876 exited_at:{seconds:1750448381 nanos:117741986}" Jun 20 19:39:44.574340 systemd[1]: Started sshd@8-147.28.145.50:22-147.75.109.163:43076.service - OpenSSH per-connection server daemon (147.75.109.163:43076). 
Jun 20 19:39:44.993051 sshd[11904]: Accepted publickey for core from 147.75.109.163 port 43076 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:39:44.994381 sshd-session[11904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:44.997913 systemd-logind[2783]: New session 11 of user core.
Jun 20 19:39:45.022023 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:39:45.339832 sshd[11906]: Connection closed by 147.75.109.163 port 43076
Jun 20 19:39:45.340158 sshd-session[11904]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:45.343178 systemd[1]: sshd@8-147.28.145.50:22-147.75.109.163:43076.service: Deactivated successfully.
Jun 20 19:39:45.345408 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:39:45.346015 systemd-logind[2783]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:39:45.346841 systemd-logind[2783]: Removed session 11.
Jun 20 19:39:45.426167 systemd[1]: Started sshd@9-147.28.145.50:22-147.75.109.163:43080.service - OpenSSH per-connection server daemon (147.75.109.163:43080).
Jun 20 19:39:45.828486 sshd[11942]: Accepted publickey for core from 147.75.109.163 port 43080 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:39:45.829824 sshd-session[11942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:45.833254 systemd-logind[2783]: New session 12 of user core.
Jun 20 19:39:45.849042 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:39:46.213760 sshd[11944]: Connection closed by 147.75.109.163 port 43080
Jun 20 19:39:46.214174 sshd-session[11942]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:46.217394 systemd[1]: sshd@9-147.28.145.50:22-147.75.109.163:43080.service: Deactivated successfully.
Jun 20 19:39:46.219143 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:39:46.219717 systemd-logind[2783]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:39:46.220538 systemd-logind[2783]: Removed session 12.
Jun 20 19:39:46.298925 systemd[1]: Started sshd@10-147.28.145.50:22-147.75.109.163:38002.service - OpenSSH per-connection server daemon (147.75.109.163:38002).
Jun 20 19:39:46.701073 sshd[11980]: Accepted publickey for core from 147.75.109.163 port 38002 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:39:46.702411 sshd-session[11980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:46.705837 systemd-logind[2783]: New session 13 of user core.
Jun 20 19:39:46.718020 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:39:47.047554 sshd[11986]: Connection closed by 147.75.109.163 port 38002
Jun 20 19:39:47.047922 sshd-session[11980]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:47.050897 systemd[1]: sshd@10-147.28.145.50:22-147.75.109.163:38002.service: Deactivated successfully.
Jun 20 19:39:47.052600 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:39:47.053217 systemd-logind[2783]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:39:47.054059 systemd-logind[2783]: Removed session 13.
Jun 20 19:39:51.363683 containerd[2800]: time="2025-06-20T19:39:51.363613440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"acc2ad8f9ea4871979f5f03ea29ddfd95bf5bf2260204894212890be9483fc5e\" pid:12035 exited_at:{seconds:1750448391 nanos:363408401}"
Jun 20 19:39:52.120897 systemd[1]: Started sshd@11-147.28.145.50:22-147.75.109.163:38006.service - OpenSSH per-connection server daemon (147.75.109.163:38006).
Jun 20 19:39:52.525187 sshd[12046]: Accepted publickey for core from 147.75.109.163 port 38006 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:39:52.526388 sshd-session[12046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:52.529489 systemd-logind[2783]: New session 14 of user core.
Jun 20 19:39:52.539020 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:39:52.875427 sshd[12048]: Connection closed by 147.75.109.163 port 38006
Jun 20 19:39:52.875687 sshd-session[12046]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:52.878734 systemd[1]: sshd@11-147.28.145.50:22-147.75.109.163:38006.service: Deactivated successfully.
Jun 20 19:39:52.881047 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:39:52.881661 systemd-logind[2783]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:39:52.882482 systemd-logind[2783]: Removed session 14.
Jun 20 19:39:57.958931 systemd[1]: Started sshd@12-147.28.145.50:22-147.75.109.163:36474.service - OpenSSH per-connection server daemon (147.75.109.163:36474).
Jun 20 19:39:58.363742 sshd[12090]: Accepted publickey for core from 147.75.109.163 port 36474 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:39:58.364964 sshd-session[12090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:58.368306 systemd-logind[2783]: New session 15 of user core.
Jun 20 19:39:58.392959 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:39:58.712490 sshd[12109]: Connection closed by 147.75.109.163 port 36474
Jun 20 19:39:58.712857 sshd-session[12090]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:58.715960 systemd[1]: sshd@12-147.28.145.50:22-147.75.109.163:36474.service: Deactivated successfully.
Jun 20 19:39:58.717703 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:39:58.718315 systemd-logind[2783]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:39:58.719179 systemd-logind[2783]: Removed session 15.
Jun 20 19:40:00.363298 containerd[2800]: time="2025-06-20T19:40:00.363254879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b272fbd7ec685a540e152b60f7314aa439f3f619bcf2dc651a2a8ad27aa74d\" id:\"3f7c1c39730131baa8e05efbca4d1089ef38d117b1d146f1334a50b02ab3f637\" pid:12154 exited_at:{seconds:1750448400 nanos:363023079}"
Jun 20 19:40:01.149672 containerd[2800]: time="2025-06-20T19:40:01.149633260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4298b21b5438000110b7ecb4f054852a44f7f4844544e1d162658b6fd3bd0b85\" id:\"0dde0a5aa39bbb086144d5f97db166e872c98724eaa87d1c7a3088a5934d9bee\" pid:12189 exited_at:{seconds:1750448401 nanos:149475741}"
Jun 20 19:40:03.788924 systemd[1]: Started sshd@13-147.28.145.50:22-147.75.109.163:36482.service - OpenSSH per-connection server daemon (147.75.109.163:36482).
Jun 20 19:40:04.205339 sshd[12204]: Accepted publickey for core from 147.75.109.163 port 36482 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:40:04.206652 sshd-session[12204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:40:04.209829 systemd-logind[2783]: New session 16 of user core.
Jun 20 19:40:04.221012 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:40:04.554601 sshd[12206]: Connection closed by 147.75.109.163 port 36482
Jun 20 19:40:04.554984 sshd-session[12204]: pam_unix(sshd:session): session closed for user core
Jun 20 19:40:04.558162 systemd[1]: sshd@13-147.28.145.50:22-147.75.109.163:36482.service: Deactivated successfully.
Jun 20 19:40:04.559895 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:40:04.560478 systemd-logind[2783]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:40:04.561340 systemd-logind[2783]: Removed session 16.
Jun 20 19:40:04.628866 systemd[1]: Started sshd@14-147.28.145.50:22-147.75.109.163:36486.service - OpenSSH per-connection server daemon (147.75.109.163:36486).
Jun 20 19:40:05.051356 sshd[12239]: Accepted publickey for core from 147.75.109.163 port 36486 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:40:05.052674 sshd-session[12239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:40:05.056116 systemd-logind[2783]: New session 17 of user core.
Jun 20 19:40:05.075015 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:40:05.441915 sshd[12241]: Connection closed by 147.75.109.163 port 36486
Jun 20 19:40:05.442206 sshd-session[12239]: pam_unix(sshd:session): session closed for user core
Jun 20 19:40:05.445276 systemd[1]: sshd@14-147.28.145.50:22-147.75.109.163:36486.service: Deactivated successfully.
Jun 20 19:40:05.447544 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:40:05.448138 systemd-logind[2783]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:40:05.448952 systemd-logind[2783]: Removed session 17.
Jun 20 19:40:05.524822 systemd[1]: Started sshd@15-147.28.145.50:22-147.75.109.163:36490.service - OpenSSH per-connection server daemon (147.75.109.163:36490).
Jun 20 19:40:05.929692 sshd[12271]: Accepted publickey for core from 147.75.109.163 port 36490 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:40:05.931200 sshd-session[12271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:40:05.934633 systemd-logind[2783]: New session 18 of user core.
Jun 20 19:40:05.957963 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:40:06.636538 sshd[12273]: Connection closed by 147.75.109.163 port 36490
Jun 20 19:40:06.636938 sshd-session[12271]: pam_unix(sshd:session): session closed for user core
Jun 20 19:40:06.640005 systemd[1]: sshd@15-147.28.145.50:22-147.75.109.163:36490.service: Deactivated successfully.
Jun 20 19:40:06.642279 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:40:06.642924 systemd-logind[2783]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:40:06.643755 systemd-logind[2783]: Removed session 18.
Jun 20 19:40:06.720937 systemd[1]: Started sshd@16-147.28.145.50:22-147.75.109.163:57542.service - OpenSSH per-connection server daemon (147.75.109.163:57542).
Jun 20 19:40:07.127796 sshd[12325]: Accepted publickey for core from 147.75.109.163 port 57542 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:40:07.129075 sshd-session[12325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:40:07.132343 systemd-logind[2783]: New session 19 of user core.
Jun 20 19:40:07.154983 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:40:07.567243 sshd[12327]: Connection closed by 147.75.109.163 port 57542
Jun 20 19:40:07.567575 sshd-session[12325]: pam_unix(sshd:session): session closed for user core
Jun 20 19:40:07.570733 systemd[1]: sshd@16-147.28.145.50:22-147.75.109.163:57542.service: Deactivated successfully.
Jun 20 19:40:07.573012 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:40:07.573649 systemd-logind[2783]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:40:07.574452 systemd-logind[2783]: Removed session 19.
Jun 20 19:40:07.639896 systemd[1]: Started sshd@17-147.28.145.50:22-147.75.109.163:57550.service - OpenSSH per-connection server daemon (147.75.109.163:57550).
Jun 20 19:40:08.046963 sshd[12373]: Accepted publickey for core from 147.75.109.163 port 57550 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:40:08.048209 sshd-session[12373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:40:08.051403 systemd-logind[2783]: New session 20 of user core.
Jun 20 19:40:08.068962 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:40:08.397437 sshd[12375]: Connection closed by 147.75.109.163 port 57550
Jun 20 19:40:08.397750 sshd-session[12373]: pam_unix(sshd:session): session closed for user core
Jun 20 19:40:08.400722 systemd[1]: sshd@17-147.28.145.50:22-147.75.109.163:57550.service: Deactivated successfully.
Jun 20 19:40:08.403040 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:40:08.403689 systemd-logind[2783]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:40:08.404528 systemd-logind[2783]: Removed session 20.
Jun 20 19:40:11.120994 containerd[2800]: time="2025-06-20T19:40:11.120943795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65b1a32b57ccca5151c086cfcd273862ce918421d4f7ab4a569e99ad14564722\" id:\"d45feb1e989ecc19510125f35022e9ebb00568dea80ff3ec7e978a503ef2ade9\" pid:12420 exited_at:{seconds:1750448411 nanos:120687676}"
Jun 20 19:40:13.473981 systemd[1]: Started sshd@18-147.28.145.50:22-147.75.109.163:57560.service - OpenSSH per-connection server daemon (147.75.109.163:57560).
Jun 20 19:40:13.876919 sshd[12449]: Accepted publickey for core from 147.75.109.163 port 57560 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:40:13.878213 sshd-session[12449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:40:13.881686 systemd-logind[2783]: New session 21 of user core.
Jun 20 19:40:13.904989 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:40:14.239161 sshd[12451]: Connection closed by 147.75.109.163 port 57560
Jun 20 19:40:14.239516 sshd-session[12449]: pam_unix(sshd:session): session closed for user core
Jun 20 19:40:14.242679 systemd[1]: sshd@18-147.28.145.50:22-147.75.109.163:57560.service: Deactivated successfully.
Jun 20 19:40:14.244430 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:40:14.245054 systemd-logind[2783]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:40:14.245890 systemd-logind[2783]: Removed session 21.
Jun 20 19:40:19.320979 systemd[1]: Started sshd@19-147.28.145.50:22-147.75.109.163:44698.service - OpenSSH per-connection server daemon (147.75.109.163:44698).
Jun 20 19:40:19.728694 sshd[12480]: Accepted publickey for core from 147.75.109.163 port 44698 ssh2: RSA SHA256:9MMLhWBw0F02HPZYteyFuv3FtbRTPW414djl/9otVmI
Jun 20 19:40:19.730016 sshd-session[12480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:40:19.733399 systemd-logind[2783]: New session 22 of user core.
Jun 20 19:40:19.752024 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:40:20.076996 sshd[12482]: Connection closed by 147.75.109.163 port 44698
Jun 20 19:40:20.077315 sshd-session[12480]: pam_unix(sshd:session): session closed for user core
Jun 20 19:40:20.080445 systemd[1]: sshd@19-147.28.145.50:22-147.75.109.163:44698.service: Deactivated successfully.
Jun 20 19:40:20.082161 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:40:20.082742 systemd-logind[2783]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:40:20.083589 systemd-logind[2783]: Removed session 22.