Jul 7 02:14:40.323343 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
Jul 7 02:14:40.323365 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 7 02:14:40.323374 kernel: KASLR enabled
Jul 7 02:14:40.323379 kernel: efi: EFI v2.7 by American Megatrends
Jul 7 02:14:40.323385 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47e818 RNG=0xebf10018 MEMRESERVE=0xe465bf98
Jul 7 02:14:40.323390 kernel: random: crng init done
Jul 7 02:14:40.323397 kernel: secureboot: Secure boot disabled
Jul 7 02:14:40.323402 kernel: esrt: Reserving ESRT space from 0x00000000ea47e818 to 0x00000000ea47e878.
Jul 7 02:14:40.323410 kernel: ACPI: Early table checksum verification disabled
Jul 7 02:14:40.323415 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
Jul 7 02:14:40.323421 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
Jul 7 02:14:40.323427 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
Jul 7 02:14:40.323432 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
Jul 7 02:14:40.323438 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
Jul 7 02:14:40.323446 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
Jul 7 02:14:40.323452 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
Jul 7 02:14:40.323458 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 7 02:14:40.323464 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
Jul 7 02:14:40.323470 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 7 02:14:40.323476 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
Jul 7 02:14:40.323482 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
Jul 7 02:14:40.323488 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
Jul 7 02:14:40.323494 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
Jul 7 02:14:40.323500 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
Jul 7 02:14:40.323508 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
Jul 7 02:14:40.323514 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
Jul 7 02:14:40.323520 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 7 02:14:40.323526 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
Jul 7 02:14:40.323532 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
Jul 7 02:14:40.323538 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 7 02:14:40.323544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
Jul 7 02:14:40.323550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
Jul 7 02:14:40.323556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
Jul 7 02:14:40.323562 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
Jul 7 02:14:40.323568 kernel: NUMA: Initialized distance table, cnt=1
Jul 7 02:14:40.323576 kernel: NUMA: Node 0 [mem 0x88300000-0x883fffff] + [mem 0x90000000-0xffffffff] -> [mem 0x88300000-0xffffffff]
Jul 7 02:14:40.323582 kernel: NUMA: Node 0 [mem 0x88300000-0xffffffff] + [mem 0x80000000000-0x8007fffffff] -> [mem 0x88300000-0x8007fffffff]
Jul 7 02:14:40.323588 kernel: NUMA: Node 0 [mem 0x88300000-0x8007fffffff] + [mem 0x80100000000-0x83fffffffff] -> [mem 0x88300000-0x83fffffffff]
Jul 7 02:14:40.323594 kernel: NODE_DATA(0) allocated [mem 0x83fdffd8a00-0x83fdffdffff]
Jul 7 02:14:40.323600 kernel: Zone ranges:
Jul 7 02:14:40.323609 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
Jul 7 02:14:40.323616 kernel: DMA32 empty
Jul 7 02:14:40.323623 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
Jul 7 02:14:40.323629 kernel: Device empty
Jul 7 02:14:40.323635 kernel: Movable zone start for each node
Jul 7 02:14:40.323642 kernel: Early memory node ranges
Jul 7 02:14:40.323648 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
Jul 7 02:14:40.323654 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
Jul 7 02:14:40.323661 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
Jul 7 02:14:40.323667 kernel: node 0: [mem 0x0000000094000000-0x00000000eba34fff]
Jul 7 02:14:40.323673 kernel: node 0: [mem 0x00000000eba35000-0x00000000ebec6fff]
Jul 7 02:14:40.323681 kernel: node 0: [mem 0x00000000ebec7000-0x00000000ebec9fff]
Jul 7 02:14:40.323694 kernel: node 0: [mem 0x00000000ebeca000-0x00000000ebeccfff]
Jul 7 02:14:40.323700 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
Jul 7 02:14:40.323707 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
Jul 7 02:14:40.323713 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
Jul 7 02:14:40.323719 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
Jul 7 02:14:40.323726 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
Jul 7 02:14:40.323732 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
Jul 7 02:14:40.323738 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
Jul 7 02:14:40.323745 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
Jul 7 02:14:40.323751 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
Jul 7 02:14:40.323757 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
Jul 7 02:14:40.323765 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
Jul 7 02:14:40.323772 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
Jul 7 02:14:40.323778 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
Jul 7 02:14:40.323785 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
Jul 7 02:14:40.323791 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
Jul 7 02:14:40.323797 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
Jul 7 02:14:40.323804 kernel: cma: Reserved 16 MiB at 0x00000000fec00000 on node -1
Jul 7 02:14:40.323811 kernel: psci: probing for conduit method from ACPI.
Jul 7 02:14:40.323817 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 02:14:40.323824 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 02:14:40.323830 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 7 02:14:40.323837 kernel: psci: SMC Calling Convention v1.2
Jul 7 02:14:40.323844 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 7 02:14:40.323850 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
Jul 7 02:14:40.323857 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
Jul 7 02:14:40.323863 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
Jul 7 02:14:40.323870 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
Jul 7 02:14:40.323876 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
Jul 7 02:14:40.323882 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
Jul 7 02:14:40.323889 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
Jul 7 02:14:40.323895 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
Jul 7 02:14:40.323901 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
Jul 7 02:14:40.323908 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
Jul 7 02:14:40.323915 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
Jul 7 02:14:40.323922 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
Jul 7 02:14:40.323929 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
Jul 7 02:14:40.323935 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
Jul 7 02:14:40.323942 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
Jul 7 02:14:40.323948 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
Jul 7 02:14:40.323955 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
Jul 7 02:14:40.323961 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
Jul 7 02:14:40.323967 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
Jul 7 02:14:40.323973 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
Jul 7 02:14:40.323980 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
Jul 7 02:14:40.323986 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
Jul 7 02:14:40.323994 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
Jul 7 02:14:40.324000 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
Jul 7 02:14:40.324006 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
Jul 7 02:14:40.324013 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
Jul 7 02:14:40.324019 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
Jul 7 02:14:40.324025 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
Jul 7 02:14:40.324032 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
Jul 7 02:14:40.324038 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
Jul 7 02:14:40.324045 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
Jul 7 02:14:40.324051 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
Jul 7 02:14:40.324058 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
Jul 7 02:14:40.324066 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
Jul 7 02:14:40.324072 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
Jul 7 02:14:40.324078 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
Jul 7 02:14:40.324085 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
Jul 7 02:14:40.324091 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
Jul 7 02:14:40.324097 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
Jul 7 02:14:40.324103 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
Jul 7 02:14:40.324110 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
Jul 7 02:14:40.324116 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
Jul 7 02:14:40.324128 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
Jul 7 02:14:40.324136 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
Jul 7 02:14:40.324143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
Jul 7 02:14:40.324150 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
Jul 7 02:14:40.324157 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
Jul 7 02:14:40.324163 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
Jul 7 02:14:40.324170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
Jul 7 02:14:40.324178 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
Jul 7 02:14:40.324185 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
Jul 7 02:14:40.324192 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
Jul 7 02:14:40.324199 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
Jul 7 02:14:40.324205 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
Jul 7 02:14:40.324212 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
Jul 7 02:14:40.324219 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
Jul 7 02:14:40.324225 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
Jul 7 02:14:40.324232 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
Jul 7 02:14:40.324239 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
Jul 7 02:14:40.324245 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
Jul 7 02:14:40.324252 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
Jul 7 02:14:40.324260 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
Jul 7 02:14:40.324267 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
Jul 7 02:14:40.324274 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
Jul 7 02:14:40.324280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
Jul 7 02:14:40.324287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
Jul 7 02:14:40.324294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
Jul 7 02:14:40.324301 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
Jul 7 02:14:40.324308 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
Jul 7 02:14:40.324314 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
Jul 7 02:14:40.324321 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
Jul 7 02:14:40.324328 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
Jul 7 02:14:40.324335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
Jul 7 02:14:40.324343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
Jul 7 02:14:40.324350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
Jul 7 02:14:40.324356 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
Jul 7 02:14:40.324363 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
Jul 7 02:14:40.324370 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
Jul 7 02:14:40.324377 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
Jul 7 02:14:40.324383 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 7 02:14:40.324390 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 7 02:14:40.324397 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
Jul 7 02:14:40.324404 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
Jul 7 02:14:40.324410 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
Jul 7 02:14:40.324418 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
Jul 7 02:14:40.324425 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
Jul 7 02:14:40.324432 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
Jul 7 02:14:40.324439 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
Jul 7 02:14:40.324445 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
Jul 7 02:14:40.324452 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
Jul 7 02:14:40.324459 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
Jul 7 02:14:40.324465 kernel: Detected PIPT I-cache on CPU0
Jul 7 02:14:40.324472 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 02:14:40.324479 kernel: CPU features: detected: Virtualization Host Extensions
Jul 7 02:14:40.324486 kernel: CPU features: detected: Spectre-v4
Jul 7 02:14:40.324494 kernel: CPU features: detected: Spectre-BHB
Jul 7 02:14:40.324501 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 02:14:40.324508 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 02:14:40.324514 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 02:14:40.324521 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 02:14:40.324528 kernel: alternatives: applying boot alternatives
Jul 7 02:14:40.324536 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 7 02:14:40.324543 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 02:14:40.324550 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 7 02:14:40.324557 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
Jul 7 02:14:40.324564 kernel: printk: log_buf_len min size: 262144 bytes
Jul 7 02:14:40.324572 kernel: printk: log_buf_len: 1048576 bytes
Jul 7 02:14:40.324578 kernel: printk: early log buf free: 249376(95%)
Jul 7 02:14:40.324585 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
Jul 7 02:14:40.324592 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
Jul 7 02:14:40.324599 kernel: Fallback order for Node 0: 0
Jul 7 02:14:40.324606 kernel: Built 1 zonelists, mobility grouping on. Total pages: 67043584
Jul 7 02:14:40.324613 kernel: Policy zone: Normal
Jul 7 02:14:40.324619 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 02:14:40.324626 kernel: software IO TLB: area num 128.
Jul 7 02:14:40.324633 kernel: software IO TLB: mapped [mem 0x00000000fac00000-0x00000000fec00000] (64MB)
Jul 7 02:14:40.324640 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
Jul 7 02:14:40.324648 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 02:14:40.324655 kernel: rcu: RCU event tracing is enabled.
Jul 7 02:14:40.324662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
Jul 7 02:14:40.324669 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 02:14:40.324676 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 02:14:40.324685 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 02:14:40.324692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
Jul 7 02:14:40.324699 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 7 02:14:40.324706 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 7 02:14:40.324713 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 02:14:40.324719 kernel: GICv3: GIC: Using split EOI/Deactivate mode
Jul 7 02:14:40.324726 kernel: GICv3: 672 SPIs implemented
Jul 7 02:14:40.324734 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 02:14:40.324741 kernel: Root IRQ handler: gic_handle_irq
Jul 7 02:14:40.324747 kernel: GICv3: GICv3 features: 16 PPIs
Jul 7 02:14:40.324754 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=1
Jul 7 02:14:40.324761 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
Jul 7 02:14:40.324768 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
Jul 7 02:14:40.324774 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
Jul 7 02:14:40.324781 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
Jul 7 02:14:40.324788 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
Jul 7 02:14:40.324794 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
Jul 7 02:14:40.324801 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
Jul 7 02:14:40.324808 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
Jul 7 02:14:40.324815 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
Jul 7 02:14:40.324822 kernel: ITS [mem 0x100100040000-0x10010005ffff]
Jul 7 02:14:40.324829 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000340000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324836 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000350000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324843 kernel: ITS [mem 0x100100060000-0x10010007ffff]
Jul 7 02:14:40.324850 kernel: ITS@0x0000100100060000: allocated 8192 Devices @80000370000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324857 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @80000380000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324864 kernel: ITS [mem 0x100100080000-0x10010009ffff]
Jul 7 02:14:40.324871 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800003a0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324878 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800003b0000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324884 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
Jul 7 02:14:40.324893 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @800003d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324900 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @800003e0000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324906 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
Jul 7 02:14:40.324913 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000800000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324920 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000810000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324927 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
Jul 7 02:14:40.324934 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000830000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324941 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000840000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324947 kernel: ITS [mem 0x100100100000-0x10010011ffff]
Jul 7 02:14:40.324954 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000860000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324961 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @80000870000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324969 kernel: ITS [mem 0x100100120000-0x10010013ffff]
Jul 7 02:14:40.324977 kernel: ITS@0x0000100100120000: allocated 8192 Devices @80000890000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 02:14:40.324984 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800008a0000 (flat, esz 2, psz 64K, shr 1)
Jul 7 02:14:40.324990 kernel: GICv3: using LPI property table @0x00000800008b0000
Jul 7 02:14:40.324997 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800008c0000
Jul 7 02:14:40.325004 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 02:14:40.325011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325018 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
Jul 7 02:14:40.325024 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
Jul 7 02:14:40.325031 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 02:14:40.325038 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 02:14:40.325047 kernel: Console: colour dummy device 80x25
Jul 7 02:14:40.325054 kernel: printk: legacy console [tty0] enabled
Jul 7 02:14:40.325061 kernel: ACPI: Core revision 20240827
Jul 7 02:14:40.325068 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 02:14:40.325075 kernel: pid_max: default: 81920 minimum: 640
Jul 7 02:14:40.325082 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 02:14:40.325089 kernel: landlock: Up and running.
Jul 7 02:14:40.325096 kernel: SELinux: Initializing.
Jul 7 02:14:40.325103 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 02:14:40.325110 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 02:14:40.325118 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 02:14:40.325125 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 02:14:40.325132 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Jul 7 02:14:40.325139 kernel: Remapping and enabling EFI services.
Jul 7 02:14:40.325146 kernel: smp: Bringing up secondary CPUs ...
Jul 7 02:14:40.325153 kernel: Detected PIPT I-cache on CPU1
Jul 7 02:14:40.325160 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
Jul 7 02:14:40.325168 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000800008d0000
Jul 7 02:14:40.325174 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325183 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
Jul 7 02:14:40.325190 kernel: Detected PIPT I-cache on CPU2
Jul 7 02:14:40.325197 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
Jul 7 02:14:40.325204 kernel: GICv3: CPU2: using allocated LPI pending table @0x00000800008e0000
Jul 7 02:14:40.325211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325218 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
Jul 7 02:14:40.325225 kernel: Detected PIPT I-cache on CPU3
Jul 7 02:14:40.325232 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
Jul 7 02:14:40.325239 kernel: GICv3: CPU3: using allocated LPI pending table @0x00000800008f0000
Jul 7 02:14:40.325247 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325254 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
Jul 7 02:14:40.325261 kernel: Detected PIPT I-cache on CPU4
Jul 7 02:14:40.325268 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
Jul 7 02:14:40.325275 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000900000
Jul 7 02:14:40.325282 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325288 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
Jul 7 02:14:40.325295 kernel: Detected PIPT I-cache on CPU5
Jul 7 02:14:40.325302 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
Jul 7 02:14:40.325309 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000910000
Jul 7 02:14:40.325318 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325325 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
Jul 7 02:14:40.325331 kernel: Detected PIPT I-cache on CPU6
Jul 7 02:14:40.325338 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
Jul 7 02:14:40.325345 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000920000
Jul 7 02:14:40.325352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325359 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
Jul 7 02:14:40.325366 kernel: Detected PIPT I-cache on CPU7
Jul 7 02:14:40.325373 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
Jul 7 02:14:40.325381 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000930000
Jul 7 02:14:40.325388 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325395 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
Jul 7 02:14:40.325402 kernel: Detected PIPT I-cache on CPU8
Jul 7 02:14:40.325409 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
Jul 7 02:14:40.325416 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000940000
Jul 7 02:14:40.325422 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325429 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
Jul 7 02:14:40.325436 kernel: Detected PIPT I-cache on CPU9
Jul 7 02:14:40.325443 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
Jul 7 02:14:40.325451 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000950000
Jul 7 02:14:40.325458 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325465 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
Jul 7 02:14:40.325472 kernel: Detected PIPT I-cache on CPU10
Jul 7 02:14:40.325479 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
Jul 7 02:14:40.325486 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000960000
Jul 7 02:14:40.325493 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325500 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
Jul 7 02:14:40.325507 kernel: Detected PIPT I-cache on CPU11
Jul 7 02:14:40.325514 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
Jul 7 02:14:40.325522 kernel: GICv3: CPU11: using allocated LPI pending table @0x0000080000970000
Jul 7 02:14:40.325529 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325535 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
Jul 7 02:14:40.325542 kernel: Detected PIPT I-cache on CPU12
Jul 7 02:14:40.325549 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
Jul 7 02:14:40.325556 kernel: GICv3: CPU12: using allocated LPI pending table @0x0000080000980000
Jul 7 02:14:40.325563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325570 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
Jul 7 02:14:40.325577 kernel: Detected PIPT I-cache on CPU13
Jul 7 02:14:40.325585 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
Jul 7 02:14:40.325592 kernel: GICv3: CPU13: using allocated LPI pending table @0x0000080000990000
Jul 7 02:14:40.325600 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325606 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
Jul 7 02:14:40.325613 kernel: Detected PIPT I-cache on CPU14
Jul 7 02:14:40.325620 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
Jul 7 02:14:40.325627 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800009a0000
Jul 7 02:14:40.325634 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325641 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
Jul 7 02:14:40.325649 kernel: Detected PIPT I-cache on CPU15
Jul 7 02:14:40.325656 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
Jul 7 02:14:40.325663 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800009b0000
Jul 7 02:14:40.325670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325677 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
Jul 7 02:14:40.325687 kernel: Detected PIPT I-cache on CPU16
Jul 7 02:14:40.325695 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
Jul 7 02:14:40.325702 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800009c0000
Jul 7 02:14:40.325709 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325715 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
Jul 7 02:14:40.325724 kernel: Detected PIPT I-cache on CPU17
Jul 7 02:14:40.325731 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
Jul 7 02:14:40.325738 kernel: GICv3: CPU17: using allocated LPI pending table @0x00000800009d0000
Jul 7 02:14:40.325746 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325753 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
Jul 7 02:14:40.325760 kernel: Detected PIPT I-cache on CPU18
Jul 7 02:14:40.325767 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
Jul 7 02:14:40.325783 kernel: GICv3: CPU18: using allocated LPI pending table @0x00000800009e0000
Jul 7 02:14:40.325791 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325800 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
Jul 7 02:14:40.325807 kernel: Detected PIPT I-cache on CPU19
Jul 7 02:14:40.325814 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
Jul 7 02:14:40.325821 kernel: GICv3: CPU19: using allocated LPI pending table @0x00000800009f0000
Jul 7 02:14:40.325829 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325836 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
Jul 7 02:14:40.325843 kernel: Detected PIPT I-cache on CPU20
Jul 7 02:14:40.325850 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
Jul 7 02:14:40.325858 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000a00000
Jul 7 02:14:40.325866 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325874 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
Jul 7 02:14:40.325881 kernel: Detected PIPT I-cache on CPU21
Jul 7 02:14:40.325889 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
Jul 7 02:14:40.325897 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000a10000
Jul 7 02:14:40.325905 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325914 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
Jul 7 02:14:40.325921 kernel: Detected PIPT I-cache on CPU22
Jul 7 02:14:40.325928 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
Jul 7 02:14:40.325936 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000a20000
Jul 7 02:14:40.325943 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325950 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
Jul 7 02:14:40.325957 kernel: Detected PIPT I-cache on CPU23
Jul 7 02:14:40.325965 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
Jul 7 02:14:40.325972 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000a30000
Jul 7 02:14:40.325979 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.325988 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
Jul 7 02:14:40.325996 kernel: Detected PIPT I-cache on CPU24
Jul 7 02:14:40.326003 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
Jul 7 02:14:40.326010 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000a40000
Jul 7 02:14:40.326017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326025 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
Jul 7 02:14:40.326032 kernel: Detected PIPT I-cache on CPU25
Jul 7 02:14:40.326039 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
Jul 7 02:14:40.326047 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000a50000
Jul 7 02:14:40.326055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326062 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
Jul 7 02:14:40.326070 kernel: Detected PIPT I-cache on CPU26
Jul 7 02:14:40.326077 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
Jul 7 02:14:40.326084 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000a60000
Jul 7 02:14:40.326092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326099 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
Jul 7 02:14:40.326107 kernel: Detected PIPT I-cache on CPU27
Jul 7 02:14:40.326114 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
Jul 7 02:14:40.326121 kernel: GICv3: CPU27: using allocated LPI pending table @0x0000080000a70000
Jul 7 02:14:40.326130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326137 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
Jul 7 02:14:40.326144 kernel: Detected PIPT I-cache on CPU28
Jul 7 02:14:40.326151 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
Jul 7 02:14:40.326159 kernel: GICv3: CPU28: using allocated LPI pending table @0x0000080000a80000
Jul 7 02:14:40.326166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326173 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
Jul 7 02:14:40.326180 kernel: Detected PIPT I-cache on CPU29
Jul 7 02:14:40.326188 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
Jul 7 02:14:40.326196 kernel: GICv3: CPU29: using allocated LPI pending table @0x0000080000a90000
Jul 7 02:14:40.326204 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326211 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
Jul 7 02:14:40.326218 kernel: Detected PIPT I-cache on CPU30
Jul 7 02:14:40.326225 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
Jul 7 02:14:40.326233 kernel: GICv3: CPU30: using allocated LPI pending table @0x0000080000aa0000
Jul 7 02:14:40.326240 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326248 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
Jul 7 02:14:40.326255 kernel: Detected PIPT I-cache on CPU31
Jul 7 02:14:40.326262 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
Jul 7 02:14:40.326271 kernel: GICv3: CPU31: using allocated LPI pending table @0x0000080000ab0000
Jul 7 02:14:40.326278 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326285 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
Jul 7 02:14:40.326293 kernel: Detected PIPT I-cache on CPU32
Jul 7 02:14:40.326300 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
Jul 7 02:14:40.326307 kernel: GICv3: CPU32: using allocated LPI pending table @0x0000080000ac0000
Jul 7 02:14:40.326315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 02:14:40.326322 kernel:
CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Jul 7 02:14:40.326329 kernel: Detected PIPT I-cache on CPU33 Jul 7 02:14:40.326338 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Jul 7 02:14:40.326345 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000ad0000 Jul 7 02:14:40.326352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326360 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Jul 7 02:14:40.326367 kernel: Detected PIPT I-cache on CPU34 Jul 7 02:14:40.326374 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Jul 7 02:14:40.326381 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000ae0000 Jul 7 02:14:40.326389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326396 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Jul 7 02:14:40.326405 kernel: Detected PIPT I-cache on CPU35 Jul 7 02:14:40.326412 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Jul 7 02:14:40.326419 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000af0000 Jul 7 02:14:40.326427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326434 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Jul 7 02:14:40.326441 kernel: Detected PIPT I-cache on CPU36 Jul 7 02:14:40.326449 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Jul 7 02:14:40.326456 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000b00000 Jul 7 02:14:40.326463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326470 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Jul 7 02:14:40.326479 kernel: Detected PIPT I-cache on CPU37 Jul 7 02:14:40.326486 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Jul 7 02:14:40.326494 
kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000b10000 Jul 7 02:14:40.326501 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326508 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Jul 7 02:14:40.326515 kernel: Detected PIPT I-cache on CPU38 Jul 7 02:14:40.326523 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Jul 7 02:14:40.326530 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000b20000 Jul 7 02:14:40.326537 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326546 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Jul 7 02:14:40.326553 kernel: Detected PIPT I-cache on CPU39 Jul 7 02:14:40.326560 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Jul 7 02:14:40.326567 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000b30000 Jul 7 02:14:40.326575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326582 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Jul 7 02:14:40.326589 kernel: Detected PIPT I-cache on CPU40 Jul 7 02:14:40.326596 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Jul 7 02:14:40.326605 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000b40000 Jul 7 02:14:40.326612 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326619 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Jul 7 02:14:40.326627 kernel: Detected PIPT I-cache on CPU41 Jul 7 02:14:40.326634 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Jul 7 02:14:40.326641 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000b50000 Jul 7 02:14:40.326649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326656 kernel: CPU41: Booted secondary processor 
0x00001a0100 [0x413fd0c1] Jul 7 02:14:40.326663 kernel: Detected PIPT I-cache on CPU42 Jul 7 02:14:40.326670 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Jul 7 02:14:40.326679 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000b60000 Jul 7 02:14:40.326689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326696 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Jul 7 02:14:40.326704 kernel: Detected PIPT I-cache on CPU43 Jul 7 02:14:40.326711 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Jul 7 02:14:40.326719 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000b70000 Jul 7 02:14:40.326726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326733 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Jul 7 02:14:40.326740 kernel: Detected PIPT I-cache on CPU44 Jul 7 02:14:40.326749 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Jul 7 02:14:40.326757 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000b80000 Jul 7 02:14:40.326764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326771 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Jul 7 02:14:40.326779 kernel: Detected PIPT I-cache on CPU45 Jul 7 02:14:40.326786 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Jul 7 02:14:40.326793 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000b90000 Jul 7 02:14:40.326801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326808 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] Jul 7 02:14:40.326815 kernel: Detected PIPT I-cache on CPU46 Jul 7 02:14:40.326825 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Jul 7 02:14:40.326832 kernel: GICv3: CPU46: using 
allocated LPI pending table @0x0000080000ba0000 Jul 7 02:14:40.326841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326848 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Jul 7 02:14:40.326855 kernel: Detected PIPT I-cache on CPU47 Jul 7 02:14:40.326863 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Jul 7 02:14:40.326870 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000bb0000 Jul 7 02:14:40.326878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326885 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Jul 7 02:14:40.326893 kernel: Detected PIPT I-cache on CPU48 Jul 7 02:14:40.326901 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Jul 7 02:14:40.326908 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000bc0000 Jul 7 02:14:40.326915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326922 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Jul 7 02:14:40.326930 kernel: Detected PIPT I-cache on CPU49 Jul 7 02:14:40.326937 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Jul 7 02:14:40.326944 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000bd0000 Jul 7 02:14:40.326952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326959 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Jul 7 02:14:40.326967 kernel: Detected PIPT I-cache on CPU50 Jul 7 02:14:40.326974 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Jul 7 02:14:40.326982 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000be0000 Jul 7 02:14:40.326989 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.326996 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Jul 7 
02:14:40.327003 kernel: Detected PIPT I-cache on CPU51 Jul 7 02:14:40.327011 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Jul 7 02:14:40.327018 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000bf0000 Jul 7 02:14:40.327025 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327034 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Jul 7 02:14:40.327041 kernel: Detected PIPT I-cache on CPU52 Jul 7 02:14:40.327048 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Jul 7 02:14:40.327056 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000c00000 Jul 7 02:14:40.327063 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327070 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Jul 7 02:14:40.327078 kernel: Detected PIPT I-cache on CPU53 Jul 7 02:14:40.327085 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Jul 7 02:14:40.327092 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000c10000 Jul 7 02:14:40.327101 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327108 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Jul 7 02:14:40.327115 kernel: Detected PIPT I-cache on CPU54 Jul 7 02:14:40.327123 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Jul 7 02:14:40.327130 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000c20000 Jul 7 02:14:40.327137 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327144 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Jul 7 02:14:40.327152 kernel: Detected PIPT I-cache on CPU55 Jul 7 02:14:40.327159 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Jul 7 02:14:40.327166 kernel: GICv3: CPU55: using allocated LPI pending table 
@0x0000080000c30000 Jul 7 02:14:40.327175 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327182 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Jul 7 02:14:40.327189 kernel: Detected PIPT I-cache on CPU56 Jul 7 02:14:40.327197 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Jul 7 02:14:40.327204 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000c40000 Jul 7 02:14:40.327212 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327219 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Jul 7 02:14:40.327226 kernel: Detected PIPT I-cache on CPU57 Jul 7 02:14:40.327233 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Jul 7 02:14:40.327241 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000c50000 Jul 7 02:14:40.327249 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327256 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Jul 7 02:14:40.327263 kernel: Detected PIPT I-cache on CPU58 Jul 7 02:14:40.327270 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Jul 7 02:14:40.327278 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000c60000 Jul 7 02:14:40.327285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327292 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Jul 7 02:14:40.327299 kernel: Detected PIPT I-cache on CPU59 Jul 7 02:14:40.327307 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Jul 7 02:14:40.327315 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000c70000 Jul 7 02:14:40.327323 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327330 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Jul 7 02:14:40.327337 kernel: Detected PIPT 
I-cache on CPU60 Jul 7 02:14:40.327344 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Jul 7 02:14:40.327352 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000c80000 Jul 7 02:14:40.327359 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327366 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Jul 7 02:14:40.327374 kernel: Detected PIPT I-cache on CPU61 Jul 7 02:14:40.327383 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Jul 7 02:14:40.327390 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000c90000 Jul 7 02:14:40.327397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327404 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Jul 7 02:14:40.327412 kernel: Detected PIPT I-cache on CPU62 Jul 7 02:14:40.327419 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Jul 7 02:14:40.327426 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000ca0000 Jul 7 02:14:40.327434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327441 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Jul 7 02:14:40.327448 kernel: Detected PIPT I-cache on CPU63 Jul 7 02:14:40.327456 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Jul 7 02:14:40.327464 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000cb0000 Jul 7 02:14:40.327471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327478 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Jul 7 02:14:40.327485 kernel: Detected PIPT I-cache on CPU64 Jul 7 02:14:40.327493 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Jul 7 02:14:40.327500 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000cc0000 Jul 7 02:14:40.327507 
kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327515 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Jul 7 02:14:40.327523 kernel: Detected PIPT I-cache on CPU65 Jul 7 02:14:40.327531 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Jul 7 02:14:40.327538 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000cd0000 Jul 7 02:14:40.327546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327553 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Jul 7 02:14:40.327560 kernel: Detected PIPT I-cache on CPU66 Jul 7 02:14:40.327567 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Jul 7 02:14:40.327574 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000ce0000 Jul 7 02:14:40.327582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327589 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Jul 7 02:14:40.327597 kernel: Detected PIPT I-cache on CPU67 Jul 7 02:14:40.327605 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Jul 7 02:14:40.327612 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000cf0000 Jul 7 02:14:40.327620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327627 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Jul 7 02:14:40.327634 kernel: Detected PIPT I-cache on CPU68 Jul 7 02:14:40.327642 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Jul 7 02:14:40.327649 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000d00000 Jul 7 02:14:40.327656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327665 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Jul 7 02:14:40.327673 kernel: Detected PIPT I-cache on CPU69 Jul 7 
02:14:40.327680 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Jul 7 02:14:40.327689 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000d10000 Jul 7 02:14:40.327697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327704 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Jul 7 02:14:40.327711 kernel: Detected PIPT I-cache on CPU70 Jul 7 02:14:40.327718 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Jul 7 02:14:40.327726 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000d20000 Jul 7 02:14:40.327734 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327742 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Jul 7 02:14:40.327749 kernel: Detected PIPT I-cache on CPU71 Jul 7 02:14:40.327756 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Jul 7 02:14:40.327763 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000d30000 Jul 7 02:14:40.327771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327778 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Jul 7 02:14:40.327785 kernel: Detected PIPT I-cache on CPU72 Jul 7 02:14:40.327793 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Jul 7 02:14:40.327800 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000d40000 Jul 7 02:14:40.327809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327816 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Jul 7 02:14:40.327824 kernel: Detected PIPT I-cache on CPU73 Jul 7 02:14:40.327831 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Jul 7 02:14:40.327838 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000d50000 Jul 7 02:14:40.327846 kernel: arch_timer: Enabling 
local workaround for ARM erratum 1418040 Jul 7 02:14:40.327853 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Jul 7 02:14:40.327860 kernel: Detected PIPT I-cache on CPU74 Jul 7 02:14:40.327868 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Jul 7 02:14:40.327876 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000d60000 Jul 7 02:14:40.327883 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327891 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Jul 7 02:14:40.327898 kernel: Detected PIPT I-cache on CPU75 Jul 7 02:14:40.327905 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Jul 7 02:14:40.327913 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000d70000 Jul 7 02:14:40.327920 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327928 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Jul 7 02:14:40.327935 kernel: Detected PIPT I-cache on CPU76 Jul 7 02:14:40.327942 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Jul 7 02:14:40.327951 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000d80000 Jul 7 02:14:40.327958 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.327966 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Jul 7 02:14:40.327973 kernel: Detected PIPT I-cache on CPU77 Jul 7 02:14:40.327980 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Jul 7 02:14:40.327988 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000d90000 Jul 7 02:14:40.327995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.328002 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Jul 7 02:14:40.328010 kernel: Detected PIPT I-cache on CPU78 Jul 7 02:14:40.328018 kernel: GICv3: CPU78: found 
redistributor 10100 region 0:0x00001001001a0000 Jul 7 02:14:40.328026 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000da0000 Jul 7 02:14:40.328033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.328040 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Jul 7 02:14:40.328047 kernel: Detected PIPT I-cache on CPU79 Jul 7 02:14:40.328055 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Jul 7 02:14:40.328062 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000db0000 Jul 7 02:14:40.328069 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 02:14:40.328077 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Jul 7 02:14:40.328084 kernel: smp: Brought up 1 node, 80 CPUs Jul 7 02:14:40.328092 kernel: SMP: Total of 80 processors activated. Jul 7 02:14:40.328099 kernel: CPU: All CPU(s) started at EL2 Jul 7 02:14:40.328106 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 02:14:40.328114 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 02:14:40.328121 kernel: CPU features: detected: Common not Private translations Jul 7 02:14:40.328128 kernel: CPU features: detected: CRC32 instructions Jul 7 02:14:40.328136 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 7 02:14:40.328143 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 02:14:40.328150 kernel: CPU features: detected: LSE atomic instructions Jul 7 02:14:40.328158 kernel: CPU features: detected: Privileged Access Never Jul 7 02:14:40.328166 kernel: CPU features: detected: RAS Extension Support Jul 7 02:14:40.328173 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 7 02:14:40.328180 kernel: alternatives: applying system-wide alternatives Jul 7 02:14:40.328187 kernel: CPU features: detected: Hardware dirty bit management on CPU0-79 Jul 7 02:14:40.328195 kernel: 
Memory: 262843548K/268174336K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 5254856K reserved, 16384K cma-reserved) Jul 7 02:14:40.328202 kernel: devtmpfs: initialized Jul 7 02:14:40.328210 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 02:14:40.328217 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 7 02:14:40.328226 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 02:14:40.328233 kernel: 0 pages in range for non-PLT usage Jul 7 02:14:40.328240 kernel: 508432 pages in range for PLT usage Jul 7 02:14:40.328248 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 02:14:40.328255 kernel: SMBIOS 3.4.0 present. Jul 7 02:14:40.328262 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Jul 7 02:14:40.328269 kernel: DMI: Memory slots populated: 8/16 Jul 7 02:14:40.328277 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 02:14:40.328284 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Jul 7 02:14:40.328293 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 02:14:40.328300 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 02:14:40.328307 kernel: audit: initializing netlink subsys (disabled) Jul 7 02:14:40.328315 kernel: audit: type=2000 audit(0.068:1): state=initialized audit_enabled=0 res=1 Jul 7 02:14:40.328322 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 02:14:40.328329 kernel: cpuidle: using governor menu Jul 7 02:14:40.328336 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 7 02:14:40.328344 kernel: ASID allocator initialised with 32768 entries Jul 7 02:14:40.328351 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 02:14:40.328360 kernel: Serial: AMBA PL011 UART driver Jul 7 02:14:40.328367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 02:14:40.328375 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 02:14:40.328382 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 02:14:40.328389 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 02:14:40.328396 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 02:14:40.328404 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 02:14:40.328411 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 02:14:40.328419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 02:14:40.328427 kernel: ACPI: Added _OSI(Module Device) Jul 7 02:14:40.328434 kernel: ACPI: Added _OSI(Processor Device) Jul 7 02:14:40.328441 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 02:14:40.328448 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Jul 7 02:14:40.328456 kernel: ACPI: Interpreter enabled Jul 7 02:14:40.328463 kernel: ACPI: Using GIC for interrupt routing Jul 7 02:14:40.328470 kernel: ACPI: MCFG table detected, 8 entries Jul 7 02:14:40.328477 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328485 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328494 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328501 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328508 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328515 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328523 
kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328530 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Jul 7 02:14:40.328537 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Jul 7 02:14:40.328545 kernel: printk: legacy console [ttyAMA0] enabled Jul 7 02:14:40.328552 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Jul 7 02:14:40.328561 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Jul 7 02:14:40.328690 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:14:40.328759 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.328817 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.328875 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.328931 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Jul 7 02:14:40.328990 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Jul 7 02:14:40.328999 kernel: PCI host bridge to bus 000d:00 Jul 7 02:14:40.329066 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Jul 7 02:14:40.329120 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Jul 7 02:14:40.329172 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Jul 7 02:14:40.329249 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.329321 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.329383 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.329442 kernel: pci 000d:00:01.0: enabling Extended Tags Jul 7 02:14:40.329500 kernel: pci 000d:00:01.0: supports D1 D2 Jul 7 02:14:40.329559 
kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.329627 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.329740 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.329808 kernel: pci 000d:00:02.0: supports D1 D2 Jul 7 02:14:40.329868 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.329935 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.329995 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.330053 kernel: pci 000d:00:03.0: supports D1 D2 Jul 7 02:14:40.330112 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.330180 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.330241 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.330300 kernel: pci 000d:00:04.0: supports D1 D2 Jul 7 02:14:40.330357 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.330366 kernel: acpiphp: Slot [1] registered Jul 7 02:14:40.330374 kernel: acpiphp: Slot [2] registered Jul 7 02:14:40.330381 kernel: acpiphp: Slot [3] registered Jul 7 02:14:40.330389 kernel: acpiphp: Slot [4] registered Jul 7 02:14:40.330441 kernel: pci_bus 000d:00: on NUMA node 0 Jul 7 02:14:40.330502 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 02:14:40.330561 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.330621 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.330680 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 02:14:40.330744 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.330803 kernel: pci 000d:00:02.0: bridge window [mem 
0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.330862 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:14:40.330924 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.330983 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.331044 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:14:40.331103 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.331161 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.331221 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]: assigned Jul 7 02:14:40.331279 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]: assigned Jul 7 02:14:40.331340 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]: assigned Jul 7 02:14:40.331399 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]: assigned Jul 7 02:14:40.331457 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]: assigned Jul 7 02:14:40.331516 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]: assigned Jul 7 02:14:40.331574 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]: assigned Jul 7 02:14:40.331632 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]: assigned Jul 7 02:14:40.331694 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.331752 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.331813 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: 
can't assign; no space Jul 7 02:14:40.331871 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.331930 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.331988 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.332047 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.332104 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.332163 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.332223 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.332281 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.332339 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.332397 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.332455 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.332513 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.332571 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.332629 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.332692 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Jul 7 02:14:40.332750 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 7 02:14:40.332809 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.332868 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Jul 7 02:14:40.332927 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 7 02:14:40.332986 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.333044 kernel: pci 000d:00:03.0: bridge window [mem 
0x50400000-0x505fffff] Jul 7 02:14:40.333104 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 7 02:14:40.333165 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.333224 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Jul 7 02:14:40.333283 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 7 02:14:40.333337 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Jul 7 02:14:40.333389 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Jul 7 02:14:40.333455 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Jul 7 02:14:40.333510 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 7 02:14:40.333572 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Jul 7 02:14:40.333627 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 7 02:14:40.333700 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Jul 7 02:14:40.333756 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 7 02:14:40.333821 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Jul 7 02:14:40.333876 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 7 02:14:40.333885 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Jul 7 02:14:40.333951 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:14:40.334010 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.334066 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.334122 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.334180 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by 
PNP0C02:00 Jul 7 02:14:40.334237 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Jul 7 02:14:40.334247 kernel: PCI host bridge to bus 0000:00 Jul 7 02:14:40.334307 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Jul 7 02:14:40.334360 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Jul 7 02:14:40.334412 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 02:14:40.334480 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.334552 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.334613 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.334673 kernel: pci 0000:00:01.0: enabling Extended Tags Jul 7 02:14:40.334736 kernel: pci 0000:00:01.0: supports D1 D2 Jul 7 02:14:40.334795 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.334861 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.334920 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.334981 kernel: pci 0000:00:02.0: supports D1 D2 Jul 7 02:14:40.335038 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.335104 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.335163 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.335221 kernel: pci 0000:00:03.0: supports D1 D2 Jul 7 02:14:40.335279 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.335346 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.335407 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.335465 kernel: pci 0000:00:04.0: supports D1 D2 Jul 7 02:14:40.335524 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.335533 kernel: acpiphp: Slot [1-1] registered Jul 7 02:14:40.335540 kernel: acpiphp: Slot 
[2-1] registered Jul 7 02:14:40.335547 kernel: acpiphp: Slot [3-1] registered Jul 7 02:14:40.335555 kernel: acpiphp: Slot [4-1] registered Jul 7 02:14:40.335606 kernel: pci_bus 0000:00: on NUMA node 0 Jul 7 02:14:40.335666 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 02:14:40.335728 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.335787 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.335846 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 02:14:40.335904 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.335962 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.336021 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:14:40.336081 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.336140 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.336198 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:14:40.336257 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.336315 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.336373 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]: assigned Jul 7 02:14:40.336433 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]: assigned Jul 7 02:14:40.336493 
kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]: assigned Jul 7 02:14:40.336551 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]: assigned Jul 7 02:14:40.336609 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]: assigned Jul 7 02:14:40.336668 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]: assigned Jul 7 02:14:40.336729 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]: assigned Jul 7 02:14:40.336787 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]: assigned Jul 7 02:14:40.336845 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.336905 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.336963 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.337022 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.337081 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.337139 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.337197 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.337255 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.337314 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.337373 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.337431 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.337489 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.337548 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.337606 kernel: pci 0000:00:02.0: bridge window 
[io size 0x1000]: failed to assign Jul 7 02:14:40.337665 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.337728 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.337787 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.337845 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Jul 7 02:14:40.337903 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 7 02:14:40.337961 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.338020 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Jul 7 02:14:40.338080 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 7 02:14:40.338138 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.338197 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Jul 7 02:14:40.338254 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 7 02:14:40.338313 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.338372 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Jul 7 02:14:40.338429 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 7 02:14:40.338484 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Jul 7 02:14:40.338537 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Jul 7 02:14:40.338599 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Jul 7 02:14:40.338655 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 7 02:14:40.338720 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Jul 7 02:14:40.338775 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 7 02:14:40.338845 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Jul 7 02:14:40.338900 
kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 7 02:14:40.338962 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Jul 7 02:14:40.339017 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 7 02:14:40.339027 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Jul 7 02:14:40.339091 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:14:40.339150 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.339207 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.339262 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.339318 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Jul 7 02:14:40.339373 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Jul 7 02:14:40.339383 kernel: PCI host bridge to bus 0005:00 Jul 7 02:14:40.339442 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Jul 7 02:14:40.339496 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Jul 7 02:14:40.339548 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Jul 7 02:14:40.339612 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.339680 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.339745 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.339803 kernel: pci 0005:00:01.0: supports D1 D2 Jul 7 02:14:40.339861 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.339929 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.339989 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 7 02:14:40.340047 kernel: 
pci 0005:00:03.0: supports D1 D2 Jul 7 02:14:40.340105 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.340171 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.340230 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 7 02:14:40.340288 kernel: pci 0005:00:05.0: bridge window [mem 0x30100000-0x301fffff] Jul 7 02:14:40.340347 kernel: pci 0005:00:05.0: supports D1 D2 Jul 7 02:14:40.340406 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.340470 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.340529 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 7 02:14:40.340588 kernel: pci 0005:00:07.0: bridge window [mem 0x30000000-0x300fffff] Jul 7 02:14:40.340646 kernel: pci 0005:00:07.0: supports D1 D2 Jul 7 02:14:40.340708 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.340720 kernel: acpiphp: Slot [1-2] registered Jul 7 02:14:40.340727 kernel: acpiphp: Slot [2-2] registered Jul 7 02:14:40.340793 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 7 02:14:40.340857 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30110000-0x30113fff 64bit] Jul 7 02:14:40.340919 kernel: pci 0005:03:00.0: ROM [mem 0x30100000-0x3010ffff pref] Jul 7 02:14:40.340985 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 7 02:14:40.341046 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30010000-0x30013fff 64bit] Jul 7 02:14:40.341106 kernel: pci 0005:04:00.0: ROM [mem 0x30000000-0x3000ffff pref] Jul 7 02:14:40.341161 kernel: pci_bus 0005:00: on NUMA node 0 Jul 7 02:14:40.341223 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 02:14:40.341281 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.341340 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] 
add_size 200000 add_align 100000 Jul 7 02:14:40.341403 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 02:14:40.341462 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.341524 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.341586 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:14:40.341646 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.341708 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 7 02:14:40.341767 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:14:40.341829 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.341889 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Jul 7 02:14:40.341951 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]: assigned Jul 7 02:14:40.342009 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]: assigned Jul 7 02:14:40.342068 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]: assigned Jul 7 02:14:40.342125 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]: assigned Jul 7 02:14:40.342184 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]: assigned Jul 7 02:14:40.342243 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]: assigned Jul 7 02:14:40.342301 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]: assigned Jul 7 02:14:40.342359 kernel: pci 
0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]: assigned Jul 7 02:14:40.342419 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.342478 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.342537 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.342595 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.342654 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.342716 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.342774 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.342833 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.342894 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.342952 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.343011 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.343069 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.343142 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.343202 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.343260 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.343320 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.343379 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.343438 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Jul 7 02:14:40.343496 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 7 02:14:40.343553 kernel: pci 
0005:00:03.0: PCI bridge to [bus 02] Jul 7 02:14:40.343612 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Jul 7 02:14:40.343670 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 7 02:14:40.343743 kernel: pci 0005:03:00.0: ROM [mem 0x30400000-0x3040ffff pref]: assigned Jul 7 02:14:40.343805 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30410000-0x30413fff 64bit]: assigned Jul 7 02:14:40.343864 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 7 02:14:40.343923 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Jul 7 02:14:40.343981 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 7 02:14:40.344043 kernel: pci 0005:04:00.0: ROM [mem 0x30600000-0x3060ffff pref]: assigned Jul 7 02:14:40.344103 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30610000-0x30613fff 64bit]: assigned Jul 7 02:14:40.344161 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 7 02:14:40.344222 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Jul 7 02:14:40.344280 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 7 02:14:40.344334 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Jul 7 02:14:40.344387 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Jul 7 02:14:40.344451 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Jul 7 02:14:40.344505 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 7 02:14:40.344575 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Jul 7 02:14:40.344632 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 7 02:14:40.344697 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Jul 7 02:14:40.344752 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 7 02:14:40.344815 kernel: pci_bus 0005:04: resource 1 [mem 
0x30600000-0x307fffff] Jul 7 02:14:40.344870 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 7 02:14:40.344881 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Jul 7 02:14:40.344946 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:14:40.345003 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.345060 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.345116 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.345172 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Jul 7 02:14:40.345228 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Jul 7 02:14:40.345239 kernel: PCI host bridge to bus 0003:00 Jul 7 02:14:40.345300 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Jul 7 02:14:40.345353 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Jul 7 02:14:40.345405 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Jul 7 02:14:40.345470 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.345538 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.345597 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.345657 kernel: pci 0003:00:01.0: supports D1 D2 Jul 7 02:14:40.345720 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.345786 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.345845 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 7 02:14:40.345903 kernel: pci 0003:00:03.0: supports D1 D2 Jul 7 02:14:40.345960 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.346041 kernel: pci 
0003:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.346103 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jul 7 02:14:40.346161 kernel: pci 0003:00:05.0: bridge window [io 0x0000-0x0fff] Jul 7 02:14:40.346219 kernel: pci 0003:00:05.0: bridge window [mem 0x10000000-0x100fffff] Jul 7 02:14:40.346277 kernel: pci 0003:00:05.0: bridge window [mem 0x240000000000-0x2400000fffff 64bit pref] Jul 7 02:14:40.346335 kernel: pci 0003:00:05.0: supports D1 D2 Jul 7 02:14:40.346393 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.346402 kernel: acpiphp: Slot [1-3] registered Jul 7 02:14:40.346411 kernel: acpiphp: Slot [2-3] registered Jul 7 02:14:40.346479 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 7 02:14:40.346540 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10020000-0x1003ffff] Jul 7 02:14:40.346600 kernel: pci 0003:03:00.0: BAR 2 [io 0x0020-0x003f] Jul 7 02:14:40.346660 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10044000-0x10047fff] Jul 7 02:14:40.346724 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 02:14:40.346784 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x240000063fff 64bit pref] Jul 7 02:14:40.346843 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x24000007ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 02:14:40.346905 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x240000043fff 64bit pref] Jul 7 02:14:40.346964 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x24000005ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 7 02:14:40.347024 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Jul 7 02:14:40.347091 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 7 02:14:40.347152 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10000000-0x1001ffff] Jul 7 02:14:40.347211 kernel: pci 0003:03:00.1: BAR 2 [io 
0x0000-0x001f] Jul 7 02:14:40.347271 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10040000-0x10043fff] Jul 7 02:14:40.347332 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Jul 7 02:14:40.347392 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x240000023fff 64bit pref] Jul 7 02:14:40.347462 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x24000003ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 02:14:40.347522 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x240000003fff 64bit pref] Jul 7 02:14:40.347583 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x24000001ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 7 02:14:40.347637 kernel: pci_bus 0003:00: on NUMA node 0 Jul 7 02:14:40.347699 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 02:14:40.347758 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.347820 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.347879 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 02:14:40.347938 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.347996 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.348056 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Jul 7 02:14:40.348114 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Jul 7 02:14:40.348174 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jul 7 02:14:40.348233 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]: assigned Jul 7 
02:14:40.348291 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]: assigned Jul 7 02:14:40.348349 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]: assigned Jul 7 02:14:40.348408 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]: assigned Jul 7 02:14:40.348466 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]: assigned Jul 7 02:14:40.348524 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.348582 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.348642 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.348706 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.348767 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.348826 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.348884 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.348943 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.349002 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.349062 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.349121 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.349179 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.349237 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.349297 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jul 7 02:14:40.349355 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 7 02:14:40.349412 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 7 02:14:40.349473 
kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Jul 7 02:14:40.349531 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 7 02:14:40.349592 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10400000-0x1041ffff]: assigned Jul 7 02:14:40.349653 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10420000-0x1043ffff]: assigned Jul 7 02:14:40.349717 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10440000-0x10443fff]: assigned Jul 7 02:14:40.349777 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000400000-0x24000041ffff 64bit pref]: assigned Jul 7 02:14:40.349837 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000420000-0x24000043ffff 64bit pref]: assigned Jul 7 02:14:40.349899 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10444000-0x10447fff]: assigned Jul 7 02:14:40.349959 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000440000-0x24000045ffff 64bit pref]: assigned Jul 7 02:14:40.350018 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000460000-0x24000047ffff 64bit pref]: assigned Jul 7 02:14:40.350078 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 02:14:40.350138 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 7 02:14:40.350198 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 02:14:40.350257 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 7 02:14:40.350319 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 02:14:40.350378 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 7 02:14:40.350438 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 02:14:40.350497 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 7 02:14:40.350556 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jul 7 02:14:40.350614 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Jul 7 02:14:40.350673 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 
64bit pref] Jul 7 02:14:40.350730 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 02:14:40.350784 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Jul 7 02:14:40.350836 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Jul 7 02:14:40.350900 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Jul 7 02:14:40.350955 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 7 02:14:40.351025 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Jul 7 02:14:40.351080 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 7 02:14:40.351141 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Jul 7 02:14:40.351197 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 7 02:14:40.351207 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Jul 7 02:14:40.351271 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:14:40.351328 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.351385 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.351440 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.351498 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Jul 7 02:14:40.351554 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Jul 7 02:14:40.351564 kernel: PCI host bridge to bus 000c:00 Jul 7 02:14:40.351624 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Jul 7 02:14:40.351677 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Jul 7 02:14:40.351736 kernel: pci_bus 000c:00: root bus resource [bus 
00-ff] Jul 7 02:14:40.351801 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.351871 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.351931 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.351990 kernel: pci 000c:00:01.0: enabling Extended Tags Jul 7 02:14:40.352048 kernel: pci 000c:00:01.0: supports D1 D2 Jul 7 02:14:40.352106 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.352172 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.352231 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.352292 kernel: pci 000c:00:02.0: supports D1 D2 Jul 7 02:14:40.352350 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.352417 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.352476 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.352535 kernel: pci 000c:00:03.0: supports D1 D2 Jul 7 02:14:40.352592 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.352657 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.352723 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.352782 kernel: pci 000c:00:04.0: supports D1 D2 Jul 7 02:14:40.352842 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.352852 kernel: acpiphp: Slot [1-4] registered Jul 7 02:14:40.352860 kernel: acpiphp: Slot [2-4] registered Jul 7 02:14:40.352867 kernel: acpiphp: Slot [3-2] registered Jul 7 02:14:40.352875 kernel: acpiphp: Slot [4-2] registered Jul 7 02:14:40.352926 kernel: pci_bus 000c:00: on NUMA node 0 Jul 7 02:14:40.352984 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 02:14:40.353046 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 
02:14:40.353105 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.353164 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 02:14:40.353222 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.353281 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.353340 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:14:40.353398 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.353459 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.353517 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:14:40.353575 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.353634 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.353695 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]: assigned Jul 7 02:14:40.353754 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]: assigned Jul 7 02:14:40.353812 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]: assigned Jul 7 02:14:40.353873 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]: assigned Jul 7 02:14:40.353931 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]: assigned Jul 7 02:14:40.353989 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]: assigned Jul 7 02:14:40.354047 kernel: pci 000c:00:04.0: 
bridge window [mem 0x40600000-0x407fffff]: assigned Jul 7 02:14:40.354105 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]: assigned Jul 7 02:14:40.354165 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.354223 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.354284 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.354342 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.354400 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.354459 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.354518 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.354576 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.354634 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.354695 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.354754 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.354814 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.354872 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.354931 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.354989 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.355047 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.355105 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.355163 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Jul 7 02:14:40.355222 kernel: pci 000c:00:01.0: bridge window [mem 
0x300000000000-0x3000001fffff 64bit pref] Jul 7 02:14:40.355282 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.355342 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Jul 7 02:14:40.355400 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 7 02:14:40.355459 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.355519 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Jul 7 02:14:40.355577 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 7 02:14:40.355635 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.355695 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Jul 7 02:14:40.355754 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 7 02:14:40.355807 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Jul 7 02:14:40.355860 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Jul 7 02:14:40.355924 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Jul 7 02:14:40.355979 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 7 02:14:40.356041 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Jul 7 02:14:40.356095 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 7 02:14:40.356164 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Jul 7 02:14:40.356222 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 7 02:14:40.356285 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Jul 7 02:14:40.356341 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 7 02:14:40.356350 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Jul 7 02:14:40.356414 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments 
MSI HPX-Type3] Jul 7 02:14:40.356471 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.356527 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.356585 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.356641 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Jul 7 02:14:40.356703 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Jul 7 02:14:40.356714 kernel: PCI host bridge to bus 0002:00 Jul 7 02:14:40.356773 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Jul 7 02:14:40.356826 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Jul 7 02:14:40.356878 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Jul 7 02:14:40.356945 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.357011 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.357070 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.357129 kernel: pci 0002:00:01.0: supports D1 D2 Jul 7 02:14:40.357188 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.357255 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.357315 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 7 02:14:40.357376 kernel: pci 0002:00:03.0: supports D1 D2 Jul 7 02:14:40.357434 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.357500 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.357560 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 7 02:14:40.357618 kernel: pci 0002:00:05.0: supports D1 D2 Jul 7 02:14:40.357676 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.357747 kernel: pci 
0002:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.357809 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 7 02:14:40.357867 kernel: pci 0002:00:07.0: supports D1 D2 Jul 7 02:14:40.357925 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.357935 kernel: acpiphp: Slot [1-5] registered Jul 7 02:14:40.357943 kernel: acpiphp: Slot [2-5] registered Jul 7 02:14:40.357950 kernel: acpiphp: Slot [3-3] registered Jul 7 02:14:40.357958 kernel: acpiphp: Slot [4-3] registered Jul 7 02:14:40.358009 kernel: pci_bus 0002:00: on NUMA node 0 Jul 7 02:14:40.358068 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 02:14:40.358129 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.358188 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 02:14:40.358247 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 02:14:40.358306 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.358364 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.358423 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:14:40.358483 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.358542 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.358600 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:14:40.358658 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 
100000 Jul 7 02:14:40.358721 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.358780 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]: assigned Jul 7 02:14:40.358839 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]: assigned Jul 7 02:14:40.358899 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]: assigned Jul 7 02:14:40.358957 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]: assigned Jul 7 02:14:40.359016 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]: assigned Jul 7 02:14:40.359076 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]: assigned Jul 7 02:14:40.359136 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]: assigned Jul 7 02:14:40.359195 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]: assigned Jul 7 02:14:40.359254 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.359312 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.359373 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.359432 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.359490 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.359551 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.359610 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.359668 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.359757 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.359817 kernel: pci 0002:00:07.0: bridge window [io size 
0x1000]: failed to assign Jul 7 02:14:40.359878 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.359937 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.359995 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.360053 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.360112 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.360170 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.360228 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.360286 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Jul 7 02:14:40.360347 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 7 02:14:40.360406 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 7 02:14:40.360464 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Jul 7 02:14:40.360523 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 7 02:14:40.360581 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 7 02:14:40.360640 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Jul 7 02:14:40.360707 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 7 02:14:40.360768 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 7 02:14:40.360827 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Jul 7 02:14:40.360886 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 7 02:14:40.360940 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Jul 7 02:14:40.360992 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Jul 7 02:14:40.361056 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Jul 7 
02:14:40.361112 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 7 02:14:40.361175 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Jul 7 02:14:40.361230 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 7 02:14:40.361290 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Jul 7 02:14:40.361345 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 7 02:14:40.361413 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Jul 7 02:14:40.361468 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 7 02:14:40.361480 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Jul 7 02:14:40.361544 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:14:40.361602 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.361658 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.361737 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.361797 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Jul 7 02:14:40.361856 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Jul 7 02:14:40.361867 kernel: PCI host bridge to bus 0001:00 Jul 7 02:14:40.361927 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Jul 7 02:14:40.361979 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Jul 7 02:14:40.362031 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Jul 7 02:14:40.362098 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.362167 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 
02:14:40.362227 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.362286 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 7 02:14:40.362344 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 7 02:14:40.362402 kernel: pci 0001:00:01.0: enabling Extended Tags Jul 7 02:14:40.362461 kernel: pci 0001:00:01.0: supports D1 D2 Jul 7 02:14:40.362520 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.362587 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.362646 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.362710 kernel: pci 0001:00:02.0: supports D1 D2 Jul 7 02:14:40.362768 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.362833 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.362893 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.362951 kernel: pci 0001:00:03.0: supports D1 D2 Jul 7 02:14:40.363011 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.363077 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.363138 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.363196 kernel: pci 0001:00:04.0: supports D1 D2 Jul 7 02:14:40.363255 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.363265 kernel: acpiphp: Slot [1-6] registered Jul 7 02:14:40.363331 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 02:14:40.363393 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref] Jul 7 02:14:40.363455 kernel: pci 0001:01:00.0: ROM [mem 0x60100000-0x601fffff pref] Jul 7 02:14:40.363515 kernel: pci 0001:01:00.0: PME# supported from D3cold Jul 7 02:14:40.363576 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 
8.0 GT/s PCIe x8 link) Jul 7 02:14:40.363643 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 02:14:40.363707 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref] Jul 7 02:14:40.363768 kernel: pci 0001:01:00.1: ROM [mem 0x60000000-0x600fffff pref] Jul 7 02:14:40.363827 kernel: pci 0001:01:00.1: PME# supported from D3cold Jul 7 02:14:40.363839 kernel: acpiphp: Slot [2-6] registered Jul 7 02:14:40.363847 kernel: acpiphp: Slot [3-4] registered Jul 7 02:14:40.363854 kernel: acpiphp: Slot [4-4] registered Jul 7 02:14:40.363906 kernel: pci_bus 0001:00: on NUMA node 0 Jul 7 02:14:40.363965 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 02:14:40.364025 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 02:14:40.364083 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.364142 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 02:14:40.364203 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:14:40.364262 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.364321 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.364381 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:14:40.364440 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.364499 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.364559 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit 
pref]: assigned Jul 7 02:14:40.364618 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]: assigned Jul 7 02:14:40.364676 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]: assigned Jul 7 02:14:40.364738 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]: assigned Jul 7 02:14:40.364796 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]: assigned Jul 7 02:14:40.364855 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]: assigned Jul 7 02:14:40.364913 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]: assigned Jul 7 02:14:40.364971 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]: assigned Jul 7 02:14:40.365032 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365090 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365148 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365206 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365265 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365323 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365381 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365440 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365500 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365557 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365616 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365673 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365735 
kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365793 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365851 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.365910 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.365973 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref]: assigned Jul 7 02:14:40.366035 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref]: assigned Jul 7 02:14:40.366095 kernel: pci 0001:01:00.0: ROM [mem 0x60000000-0x600fffff pref]: assigned Jul 7 02:14:40.366155 kernel: pci 0001:01:00.1: ROM [mem 0x60100000-0x601fffff pref]: assigned Jul 7 02:14:40.366214 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 7 02:14:40.366273 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 7 02:14:40.366331 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 7 02:14:40.366390 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 7 02:14:40.366451 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Jul 7 02:14:40.366509 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 7 02:14:40.366568 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.366626 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Jul 7 02:14:40.366688 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 7 02:14:40.366747 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 7 02:14:40.366807 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Jul 7 02:14:40.366868 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 7 02:14:40.366921 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Jul 7 02:14:40.366973 kernel: 
pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Jul 7 02:14:40.367036 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Jul 7 02:14:40.367091 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 7 02:14:40.367160 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Jul 7 02:14:40.367217 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 7 02:14:40.367279 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Jul 7 02:14:40.367334 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 7 02:14:40.367395 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Jul 7 02:14:40.367449 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 7 02:14:40.367459 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Jul 7 02:14:40.367524 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:14:40.367582 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 02:14:40.367638 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Jul 7 02:14:40.367698 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 02:14:40.367755 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Jul 7 02:14:40.367811 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Jul 7 02:14:40.367821 kernel: PCI host bridge to bus 0004:00 Jul 7 02:14:40.367881 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Jul 7 02:14:40.367934 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Jul 7 02:14:40.367985 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Jul 7 02:14:40.368050 kernel: pci 0004:00:00.0: [1def:e110] 
type 00 class 0x060000 conventional PCI endpoint Jul 7 02:14:40.368116 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.368175 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 7 02:14:40.368234 kernel: pci 0004:00:01.0: bridge window [io 0x0000-0x0fff] Jul 7 02:14:40.368294 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x220fffff] Jul 7 02:14:40.368352 kernel: pci 0004:00:01.0: supports D1 D2 Jul 7 02:14:40.368410 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.368475 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.368534 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.368592 kernel: pci 0004:00:03.0: bridge window [mem 0x22200000-0x222fffff] Jul 7 02:14:40.368650 kernel: pci 0004:00:03.0: supports D1 D2 Jul 7 02:14:40.368714 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.368779 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 02:14:40.368839 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 7 02:14:40.368898 kernel: pci 0004:00:05.0: supports D1 D2 Jul 7 02:14:40.368957 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Jul 7 02:14:40.369023 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 7 02:14:40.369084 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 7 02:14:40.369148 kernel: pci 0004:01:00.0: bridge window [io 0x0000-0x0fff] Jul 7 02:14:40.369208 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x220fffff] Jul 7 02:14:40.369267 kernel: pci 0004:01:00.0: enabling Extended Tags Jul 7 02:14:40.369327 kernel: pci 0004:01:00.0: supports D1 D2 Jul 7 02:14:40.369388 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 02:14:40.369455 kernel: pci_bus 0004:02: extended config space not accessible Jul 7 02:14:40.369525 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 
conventional PCI endpoint Jul 7 02:14:40.369591 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff] Jul 7 02:14:40.369653 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff] Jul 7 02:14:40.369720 kernel: pci 0004:02:00.0: BAR 2 [io 0x0000-0x007f] Jul 7 02:14:40.369782 kernel: pci 0004:02:00.0: supports D1 D2 Jul 7 02:14:40.369844 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 02:14:40.369919 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 PCIe Endpoint Jul 7 02:14:40.369981 kernel: pci 0004:03:00.0: BAR 0 [mem 0x22200000-0x22201fff 64bit] Jul 7 02:14:40.370044 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 02:14:40.370097 kernel: pci_bus 0004:00: on NUMA node 0 Jul 7 02:14:40.370156 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Jul 7 02:14:40.370216 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:14:40.370275 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 02:14:40.370333 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 7 02:14:40.370394 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:14:40.370455 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.370513 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 02:14:40.370571 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 7 02:14:40.370630 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]: assigned Jul 7 02:14:40.370692 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]: assigned Jul 7 
02:14:40.370751 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]: assigned Jul 7 02:14:40.370810 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]: assigned Jul 7 02:14:40.370871 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]: assigned Jul 7 02:14:40.370931 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.370991 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.371049 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.371108 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.371166 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.371224 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.371282 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.371343 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.371401 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.371460 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.371518 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.371578 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.371639 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 7 02:14:40.371703 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 02:14:40.371764 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: failed to assign Jul 7 02:14:40.371829 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff]: assigned Jul 7 02:14:40.371891 kernel: pci 0004:02:00.0: BAR 1 [mem 
0x22000000-0x2201ffff]: assigned Jul 7 02:14:40.371954 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: can't assign; no space Jul 7 02:14:40.372016 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: failed to assign Jul 7 02:14:40.372076 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 7 02:14:40.372136 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Jul 7 02:14:40.372194 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 7 02:14:40.372254 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Jul 7 02:14:40.372314 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 7 02:14:40.372375 kernel: pci 0004:03:00.0: BAR 0 [mem 0x23000000-0x23001fff 64bit]: assigned Jul 7 02:14:40.372434 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 7 02:14:40.372492 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Jul 7 02:14:40.372551 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 7 02:14:40.372609 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 7 02:14:40.372668 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Jul 7 02:14:40.372733 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 7 02:14:40.372786 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 02:14:40.372839 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Jul 7 02:14:40.372891 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Jul 7 02:14:40.372953 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Jul 7 02:14:40.373010 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 7 02:14:40.373068 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Jul 7 02:14:40.373134 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Jul 7 02:14:40.373188 kernel: 
pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 7 02:14:40.373250 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Jul 7 02:14:40.373305 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 7 02:14:40.373314 kernel: ACPI: CPU18 has been hot-added Jul 7 02:14:40.373322 kernel: ACPI: CPU58 has been hot-added Jul 7 02:14:40.373330 kernel: ACPI: CPU38 has been hot-added Jul 7 02:14:40.373339 kernel: ACPI: CPU78 has been hot-added Jul 7 02:14:40.373347 kernel: ACPI: CPU16 has been hot-added Jul 7 02:14:40.373354 kernel: ACPI: CPU56 has been hot-added Jul 7 02:14:40.373362 kernel: ACPI: CPU36 has been hot-added Jul 7 02:14:40.373369 kernel: ACPI: CPU76 has been hot-added Jul 7 02:14:40.373377 kernel: ACPI: CPU17 has been hot-added Jul 7 02:14:40.373384 kernel: ACPI: CPU57 has been hot-added Jul 7 02:14:40.373392 kernel: ACPI: CPU37 has been hot-added Jul 7 02:14:40.373399 kernel: ACPI: CPU77 has been hot-added Jul 7 02:14:40.373408 kernel: ACPI: CPU19 has been hot-added Jul 7 02:14:40.373416 kernel: ACPI: CPU59 has been hot-added Jul 7 02:14:40.373423 kernel: ACPI: CPU39 has been hot-added Jul 7 02:14:40.373431 kernel: ACPI: CPU79 has been hot-added Jul 7 02:14:40.373438 kernel: ACPI: CPU12 has been hot-added Jul 7 02:14:40.373446 kernel: ACPI: CPU52 has been hot-added Jul 7 02:14:40.373453 kernel: ACPI: CPU32 has been hot-added Jul 7 02:14:40.373461 kernel: ACPI: CPU72 has been hot-added Jul 7 02:14:40.373469 kernel: ACPI: CPU8 has been hot-added Jul 7 02:14:40.373476 kernel: ACPI: CPU48 has been hot-added Jul 7 02:14:40.373485 kernel: ACPI: CPU28 has been hot-added Jul 7 02:14:40.373493 kernel: ACPI: CPU68 has been hot-added Jul 7 02:14:40.373500 kernel: ACPI: CPU10 has been hot-added Jul 7 02:14:40.373508 kernel: ACPI: CPU50 has been hot-added Jul 7 02:14:40.373515 kernel: ACPI: CPU30 has been hot-added Jul 7 02:14:40.373523 kernel: ACPI: CPU70 has been hot-added Jul 7 02:14:40.373531 
kernel: ACPI: CPU14 has been hot-added Jul 7 02:14:40.373538 kernel: ACPI: CPU54 has been hot-added Jul 7 02:14:40.373546 kernel: ACPI: CPU34 has been hot-added Jul 7 02:14:40.373555 kernel: ACPI: CPU74 has been hot-added Jul 7 02:14:40.373563 kernel: ACPI: CPU4 has been hot-added Jul 7 02:14:40.373571 kernel: ACPI: CPU44 has been hot-added Jul 7 02:14:40.373578 kernel: ACPI: CPU24 has been hot-added Jul 7 02:14:40.373586 kernel: ACPI: CPU64 has been hot-added Jul 7 02:14:40.373593 kernel: ACPI: CPU0 has been hot-added Jul 7 02:14:40.373601 kernel: ACPI: CPU40 has been hot-added Jul 7 02:14:40.373608 kernel: ACPI: CPU20 has been hot-added Jul 7 02:14:40.373616 kernel: ACPI: CPU60 has been hot-added Jul 7 02:14:40.373623 kernel: ACPI: CPU2 has been hot-added Jul 7 02:14:40.373632 kernel: ACPI: CPU42 has been hot-added Jul 7 02:14:40.373640 kernel: ACPI: CPU22 has been hot-added Jul 7 02:14:40.373647 kernel: ACPI: CPU62 has been hot-added Jul 7 02:14:40.373655 kernel: ACPI: CPU6 has been hot-added Jul 7 02:14:40.373663 kernel: ACPI: CPU46 has been hot-added Jul 7 02:14:40.373670 kernel: ACPI: CPU26 has been hot-added Jul 7 02:14:40.373678 kernel: ACPI: CPU66 has been hot-added Jul 7 02:14:40.373688 kernel: ACPI: CPU5 has been hot-added Jul 7 02:14:40.373696 kernel: ACPI: CPU45 has been hot-added Jul 7 02:14:40.373705 kernel: ACPI: CPU25 has been hot-added Jul 7 02:14:40.373713 kernel: ACPI: CPU65 has been hot-added Jul 7 02:14:40.373720 kernel: ACPI: CPU1 has been hot-added Jul 7 02:14:40.373728 kernel: ACPI: CPU41 has been hot-added Jul 7 02:14:40.373736 kernel: ACPI: CPU21 has been hot-added Jul 7 02:14:40.373743 kernel: ACPI: CPU61 has been hot-added Jul 7 02:14:40.373751 kernel: ACPI: CPU3 has been hot-added Jul 7 02:14:40.373758 kernel: ACPI: CPU43 has been hot-added Jul 7 02:14:40.373766 kernel: ACPI: CPU23 has been hot-added Jul 7 02:14:40.373773 kernel: ACPI: CPU63 has been hot-added Jul 7 02:14:40.373782 kernel: ACPI: CPU7 has been hot-added Jul 7 
02:14:40.373790 kernel: ACPI: CPU47 has been hot-added Jul 7 02:14:40.373797 kernel: ACPI: CPU27 has been hot-added Jul 7 02:14:40.373805 kernel: ACPI: CPU67 has been hot-added Jul 7 02:14:40.373812 kernel: ACPI: CPU13 has been hot-added Jul 7 02:14:40.373820 kernel: ACPI: CPU53 has been hot-added Jul 7 02:14:40.373827 kernel: ACPI: CPU33 has been hot-added Jul 7 02:14:40.373835 kernel: ACPI: CPU73 has been hot-added Jul 7 02:14:40.373843 kernel: ACPI: CPU9 has been hot-added Jul 7 02:14:40.373851 kernel: ACPI: CPU49 has been hot-added Jul 7 02:14:40.373859 kernel: ACPI: CPU29 has been hot-added Jul 7 02:14:40.373867 kernel: ACPI: CPU69 has been hot-added Jul 7 02:14:40.373874 kernel: ACPI: CPU11 has been hot-added Jul 7 02:14:40.373881 kernel: ACPI: CPU51 has been hot-added Jul 7 02:14:40.373889 kernel: ACPI: CPU31 has been hot-added Jul 7 02:14:40.373896 kernel: ACPI: CPU71 has been hot-added Jul 7 02:14:40.373904 kernel: ACPI: CPU15 has been hot-added Jul 7 02:14:40.373912 kernel: ACPI: CPU55 has been hot-added Jul 7 02:14:40.373919 kernel: ACPI: CPU35 has been hot-added Jul 7 02:14:40.373928 kernel: ACPI: CPU75 has been hot-added Jul 7 02:14:40.373936 kernel: iommu: Default domain type: Translated Jul 7 02:14:40.373943 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 02:14:40.373951 kernel: efivars: Registered efivars operations Jul 7 02:14:40.374015 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Jul 7 02:14:40.374078 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Jul 7 02:14:40.374140 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Jul 7 02:14:40.374150 kernel: vgaarb: loaded Jul 7 02:14:40.374158 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 02:14:40.374167 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 02:14:40.374175 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 02:14:40.374183 kernel: pnp: PnP ACPI init Jul 7 02:14:40.374245 
kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Jul 7 02:14:40.374303 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Jul 7 02:14:40.374356 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Jul 7 02:14:40.374410 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Jul 7 02:14:40.374464 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Jul 7 02:14:40.374517 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Jul 7 02:14:40.374571 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Jul 7 02:14:40.374624 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Jul 7 02:14:40.374634 kernel: pnp: PnP ACPI: found 1 devices Jul 7 02:14:40.374642 kernel: NET: Registered PF_INET protocol family Jul 7 02:14:40.374649 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 02:14:40.374657 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 7 02:14:40.374667 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 02:14:40.374675 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 02:14:40.374685 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 02:14:40.374693 kernel: TCP: Hash tables configured (established 524288 bind 65536) Jul 7 02:14:40.374701 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 02:14:40.374708 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 02:14:40.374716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 02:14:40.374778 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Jul 7 02:14:40.374790 kernel: kvm [1]: nv: 
554 coarse grained trap handlers Jul 7 02:14:40.374798 kernel: kvm [1]: IPA Size Limit: 48 bits Jul 7 02:14:40.374805 kernel: kvm [1]: GICv3: no GICV resource entry Jul 7 02:14:40.374813 kernel: kvm [1]: disabling GICv2 emulation Jul 7 02:14:40.374821 kernel: kvm [1]: GIC system register CPU interface enabled Jul 7 02:14:40.374829 kernel: kvm [1]: vgic interrupt IRQ9 Jul 7 02:14:40.374837 kernel: kvm [1]: VHE mode initialized successfully Jul 7 02:14:40.374844 kernel: Initialise system trusted keyrings Jul 7 02:14:40.374852 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Jul 7 02:14:40.374859 kernel: Key type asymmetric registered Jul 7 02:14:40.374868 kernel: Asymmetric key parser 'x509' registered Jul 7 02:14:40.374876 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 7 02:14:40.374883 kernel: io scheduler mq-deadline registered Jul 7 02:14:40.374891 kernel: io scheduler kyber registered Jul 7 02:14:40.374899 kernel: io scheduler bfq registered Jul 7 02:14:40.374907 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 7 02:14:40.374914 kernel: ACPI: button: Power Button [PWRB] Jul 7 02:14:40.374922 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Jul 7 02:14:40.374930 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 02:14:40.374997 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Jul 7 02:14:40.375053 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.375111 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.375166 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.375220 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 32768 entries for evtq Jul 7 02:14:40.375274 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for priq Jul 7 02:14:40.375339 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Jul 7 02:14:40.375394 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.375449 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.375459 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.375467 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 7 02:14:40.375519 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.375529 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.375536 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 7 02:14:40.375591 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 32768 entries for evtq Jul 7 02:14:40.375601 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.375608 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 7 02:14:40.375660 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 65536 entries for priq Jul 7 02:14:40.375726 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Jul 7 02:14:40.375781 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.375835 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.375847 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.375854 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.375909 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.375918 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.375926 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.375978 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 32768 entries for evtq Jul 7 02:14:40.375988 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.375995 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376047 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 65536 entries for priq Jul 7 02:14:40.376059 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 128 pages, ret: -12 Jul 7 02:14:40.376067 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376127 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Jul 7 02:14:40.376182 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.376236 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.376246 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.376253 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376305 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.376315 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.376325 kernel: cma: number of available pages: => 0 free 
of 4096 total pages Jul 7 02:14:40.376377 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 32768 entries for evtq Jul 7 02:14:40.376387 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 7 02:14:40.376394 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376446 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 65536 entries for priq Jul 7 02:14:40.376456 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376517 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Jul 7 02:14:40.376572 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.376628 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.376638 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376714 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.376725 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376779 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 32768 entries for evtq Jul 7 02:14:40.376789 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376841 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 65536 entries for priq Jul 7 02:14:40.376851 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.376911 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Jul 7 02:14:40.376969 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.377023 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.377033 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377085 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.377094 kernel: cma: number of available pages: => 0 
free of 4096 total pages Jul 7 02:14:40.377147 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 32768 entries for evtq Jul 7 02:14:40.377157 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377209 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 65536 entries for priq Jul 7 02:14:40.377221 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377287 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Jul 7 02:14:40.377342 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.377396 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.377406 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377458 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.377468 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377526 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 32768 entries for evtq Jul 7 02:14:40.377536 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377588 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 65536 entries for priq Jul 7 02:14:40.377598 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377656 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Jul 7 02:14:40.377715 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 02:14:40.377775 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 02:14:40.377788 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377843 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 65536 entries for cmdq Jul 7 02:14:40.377854 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377908 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 
32768 entries for evtq Jul 7 02:14:40.377918 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377971 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 65536 entries for priq Jul 7 02:14:40.377982 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.377990 kernel: thunder_xcv, ver 1.0 Jul 7 02:14:40.377998 kernel: thunder_bgx, ver 1.0 Jul 7 02:14:40.378005 kernel: nicpf, ver 1.0 Jul 7 02:14:40.378014 kernel: nicvf, ver 1.0 Jul 7 02:14:40.378076 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 02:14:40.378131 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T02:14:38 UTC (1751854478) Jul 7 02:14:40.378142 kernel: efifb: probing for efifb Jul 7 02:14:40.378149 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Jul 7 02:14:40.378157 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 7 02:14:40.378165 kernel: efifb: scrolling: redraw Jul 7 02:14:40.378173 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 02:14:40.378183 kernel: Console: switching to colour frame buffer device 100x37 Jul 7 02:14:40.378191 kernel: fb0: EFI VGA frame buffer device Jul 7 02:14:40.378199 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Jul 7 02:14:40.378206 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 02:14:40.378214 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 7 02:14:40.378222 kernel: watchdog: NMI not fully supported Jul 7 02:14:40.378230 kernel: NET: Registered PF_INET6 protocol family Jul 7 02:14:40.378237 kernel: watchdog: Hard watchdog permanently disabled Jul 7 02:14:40.378245 kernel: Segment Routing with IPv6 Jul 7 02:14:40.378253 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 02:14:40.378261 kernel: NET: Registered PF_PACKET protocol family Jul 7 02:14:40.378269 kernel: Key type dns_resolver registered Jul 7 02:14:40.378276 kernel: registered taskstats version 1 Jul 7 
02:14:40.378284 kernel: Loading compiled-in X.509 certificates Jul 7 02:14:40.378292 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718' Jul 7 02:14:40.378299 kernel: Demotion targets for Node 0: null Jul 7 02:14:40.378307 kernel: Key type .fscrypt registered Jul 7 02:14:40.378314 kernel: Key type fscrypt-provisioning registered Jul 7 02:14:40.378323 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 02:14:40.378330 kernel: ima: Allocated hash algorithm: sha1 Jul 7 02:14:40.378338 kernel: ima: No architecture policies found Jul 7 02:14:40.378346 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 02:14:40.378353 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.378416 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Jul 7 02:14:40.378476 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Jul 7 02:14:40.378537 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Jul 7 02:14:40.378596 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Jul 7 02:14:40.378658 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Jul 7 02:14:40.378722 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Jul 7 02:14:40.378783 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Jul 7 02:14:40.378842 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Jul 7 02:14:40.378853 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.378911 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Jul 7 02:14:40.378971 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Jul 7 02:14:40.379031 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Jul 7 02:14:40.379090 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Jul 7 02:14:40.379152 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Jul 7 02:14:40.379212 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Jul 7 02:14:40.379272 kernel: 
pcieport 0000:00:04.0: Adding to iommu group 7 Jul 7 02:14:40.379330 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Jul 7 02:14:40.379340 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.379398 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Jul 7 02:14:40.379457 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Jul 7 02:14:40.379517 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Jul 7 02:14:40.379578 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Jul 7 02:14:40.379638 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Jul 7 02:14:40.379701 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Jul 7 02:14:40.379761 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Jul 7 02:14:40.379820 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Jul 7 02:14:40.379831 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.379890 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Jul 7 02:14:40.379949 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Jul 7 02:14:40.380009 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Jul 7 02:14:40.380069 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Jul 7 02:14:40.380128 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Jul 7 02:14:40.380187 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Jul 7 02:14:40.380197 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.380255 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Jul 7 02:14:40.380314 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Jul 7 02:14:40.380373 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Jul 7 02:14:40.380432 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Jul 7 02:14:40.380491 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Jul 7 02:14:40.380552 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Jul 7 02:14:40.380611 kernel: pcieport 
000c:00:04.0: Adding to iommu group 18 Jul 7 02:14:40.380670 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Jul 7 02:14:40.380680 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.380743 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Jul 7 02:14:40.380805 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Jul 7 02:14:40.380868 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Jul 7 02:14:40.380927 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Jul 7 02:14:40.380990 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Jul 7 02:14:40.381049 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Jul 7 02:14:40.381110 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Jul 7 02:14:40.381169 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Jul 7 02:14:40.381179 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.381237 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Jul 7 02:14:40.381296 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Jul 7 02:14:40.381356 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Jul 7 02:14:40.381415 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Jul 7 02:14:40.381476 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Jul 7 02:14:40.381535 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Jul 7 02:14:40.381595 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Jul 7 02:14:40.381654 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Jul 7 02:14:40.381664 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.381726 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Jul 7 02:14:40.381784 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Jul 7 02:14:40.381844 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Jul 7 02:14:40.381905 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Jul 7 02:14:40.381964 kernel: pcieport 
0004:00:05.0: Adding to iommu group 29 Jul 7 02:14:40.382022 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Jul 7 02:14:40.382032 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 7 02:14:40.382092 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Jul 7 02:14:40.382101 kernel: clk: Disabling unused clocks Jul 7 02:14:40.382109 kernel: PM: genpd: Disabling unused power domains Jul 7 02:14:40.382117 kernel: Warning: unable to open an initial console. Jul 7 02:14:40.382125 kernel: Freeing unused kernel memory: 39488K Jul 7 02:14:40.382134 kernel: Run /init as init process Jul 7 02:14:40.382142 kernel: with arguments: Jul 7 02:14:40.382149 kernel: /init Jul 7 02:14:40.382157 kernel: with environment: Jul 7 02:14:40.382164 kernel: HOME=/ Jul 7 02:14:40.382172 kernel: TERM=linux Jul 7 02:14:40.382179 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 02:14:40.382188 systemd[1]: Successfully made /usr/ read-only. Jul 7 02:14:40.382199 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 02:14:40.382209 systemd[1]: Detected architecture arm64. Jul 7 02:14:40.382216 systemd[1]: Running in initrd. Jul 7 02:14:40.382224 systemd[1]: No hostname configured, using default hostname. Jul 7 02:14:40.382232 systemd[1]: Hostname set to . Jul 7 02:14:40.382240 systemd[1]: Initializing machine ID from random generator. Jul 7 02:14:40.382249 systemd[1]: Queued start job for default target initrd.target. Jul 7 02:14:40.382258 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 02:14:40.382265 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 02:14:40.382276 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 02:14:40.382284 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 02:14:40.382292 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 02:14:40.382300 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 02:14:40.382309 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 02:14:40.382318 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 02:14:40.382327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 02:14:40.382335 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 02:14:40.382343 systemd[1]: Reached target paths.target - Path Units. Jul 7 02:14:40.382351 systemd[1]: Reached target slices.target - Slice Units. Jul 7 02:14:40.382359 systemd[1]: Reached target swap.target - Swaps. Jul 7 02:14:40.382367 systemd[1]: Reached target timers.target - Timer Units. Jul 7 02:14:40.382375 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 02:14:40.382383 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 02:14:40.382391 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 02:14:40.382401 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 02:14:40.382409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 02:14:40.382417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 7 02:14:40.382425 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 02:14:40.382433 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 02:14:40.382441 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 02:14:40.382449 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 02:14:40.382457 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 02:14:40.382467 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 02:14:40.382475 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 02:14:40.382483 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 02:14:40.382491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 02:14:40.382499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 02:14:40.382526 systemd-journald[908]: Collecting audit messages is disabled.
Jul 7 02:14:40.382547 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 02:14:40.382555 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 02:14:40.382563 kernel: Bridge firewalling registered
Jul 7 02:14:40.382571 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 02:14:40.382582 systemd-journald[908]: Journal started
Jul 7 02:14:40.382600 systemd-journald[908]: Runtime Journal (/run/log/journal/a557ad0ac1214a689035ba6f753adca1) is 8M, max 4G, 3.9G free.
Jul 7 02:14:40.322832 systemd-modules-load[911]: Inserted module 'overlay'
Jul 7 02:14:40.346175 systemd-modules-load[911]: Inserted module 'br_netfilter'
Jul 7 02:14:40.430039 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 02:14:40.435808 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 02:14:40.443169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 02:14:40.453850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 02:14:40.468577 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 02:14:40.476744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 02:14:40.502389 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 02:14:40.509186 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 02:14:40.528877 systemd-tmpfiles[941]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 02:14:40.538022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 02:14:40.554357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 02:14:40.570918 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 02:14:40.582261 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 02:14:40.602060 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 02:14:40.638844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 02:14:40.652141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 02:14:40.664898 dracut-cmdline[959]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 7 02:14:40.672496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 02:14:40.677883 systemd-resolved[962]: Positive Trust Anchors:
Jul 7 02:14:40.677892 systemd-resolved[962]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 02:14:40.677923 systemd-resolved[962]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 02:14:40.693690 systemd-resolved[962]: Defaulting to hostname 'linux'.
Jul 7 02:14:40.711276 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 02:14:40.731582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 02:14:40.836693 kernel: SCSI subsystem initialized
Jul 7 02:14:40.851691 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 02:14:40.871691 kernel: iscsi: registered transport (tcp)
Jul 7 02:14:40.899089 kernel: iscsi: registered transport (qla4xxx)
Jul 7 02:14:40.899110 kernel: QLogic iSCSI HBA Driver
Jul 7 02:14:40.917735 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 02:14:40.952751 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 02:14:40.970378 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 02:14:41.021698 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 02:14:41.033423 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 02:14:41.117696 kernel: raid6: neonx8 gen() 15856 MB/s
Jul 7 02:14:41.143688 kernel: raid6: neonx4 gen() 15895 MB/s
Jul 7 02:14:41.169693 kernel: raid6: neonx2 gen() 13262 MB/s
Jul 7 02:14:41.194692 kernel: raid6: neonx1 gen() 10583 MB/s
Jul 7 02:14:41.219693 kernel: raid6: int64x8 gen() 6934 MB/s
Jul 7 02:14:41.244688 kernel: raid6: int64x4 gen() 7390 MB/s
Jul 7 02:14:41.269692 kernel: raid6: int64x2 gen() 6131 MB/s
Jul 7 02:14:41.298141 kernel: raid6: int64x1 gen() 5077 MB/s
Jul 7 02:14:41.298162 kernel: raid6: using algorithm neonx4 gen() 15895 MB/s
Jul 7 02:14:41.333059 kernel: raid6: .... xor() 12375 MB/s, rmw enabled
Jul 7 02:14:41.333080 kernel: raid6: using neon recovery algorithm
Jul 7 02:14:41.358054 kernel: xor: measuring software checksum speed
Jul 7 02:14:41.358076 kernel: 8regs : 21607 MB/sec
Jul 7 02:14:41.366452 kernel: 32regs : 21687 MB/sec
Jul 7 02:14:41.374713 kernel: arm64_neon : 28273 MB/sec
Jul 7 02:14:41.382789 kernel: xor: using function: arm64_neon (28273 MB/sec)
Jul 7 02:14:41.448691 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 02:14:41.454392 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 02:14:41.462172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 02:14:41.506877 systemd-udevd[1181]: Using default interface naming scheme 'v255'.
Jul 7 02:14:41.510834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 02:14:41.517134 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 02:14:41.559461 dracut-pre-trigger[1191]: rd.md=0: removing MD RAID activation
Jul 7 02:14:41.581438 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 02:14:41.590935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 02:14:41.899719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 02:14:42.057254 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 02:14:42.057273 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 02:14:42.057284 kernel: PTP clock support registered
Jul 7 02:14:42.057298 kernel: cma: number of available pages: => 0 free of 4096 total pages
Jul 7 02:14:42.057308 kernel: ACPI: bus type USB registered
Jul 7 02:14:42.057317 kernel: usbcore: registered new interface driver usbfs
Jul 7 02:14:42.057327 kernel: usbcore: registered new interface driver hub
Jul 7 02:14:42.057336 kernel: nvme 0005:03:00.0: Adding to iommu group 31
Jul 7 02:14:42.057484 kernel: usbcore: registered new device driver usb
Jul 7 02:14:42.057495 kernel: cma: number of available pages: => 0 free of 4096 total pages
Jul 7 02:14:42.057504 kernel: nvme 0005:04:00.0: Adding to iommu group 32
Jul 7 02:14:42.057592 kernel: nvme nvme0: pci function 0005:03:00.0
Jul 7 02:14:42.057695 kernel: nvme nvme1: pci function 0005:04:00.0
Jul 7 02:14:42.057778 kernel: nvme nvme0: D3 entry latency set to 8 seconds
Jul 7 02:14:42.057842 kernel: nvme nvme1: D3 entry latency set to 8 seconds
Jul 7 02:14:42.057245 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 02:14:42.146165 kernel: nvme nvme1: 32/0/0 default/read/poll queues
Jul 7 02:14:42.146395 kernel: nvme nvme0: 32/0/0 default/read/poll queues
Jul 7 02:14:42.146475 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 02:14:42.146486 kernel: GPT:9289727 != 1875385007
Jul 7 02:14:42.146496 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 02:14:42.146505 kernel: GPT:9289727 != 1875385007
Jul 7 02:14:42.146514 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 02:14:42.146523 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 02:14:42.064705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 02:14:42.278954 kernel: cma: number of available pages: => 0 free of 4096 total pages
Jul 7 02:14:42.278985 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 33
Jul 7 02:14:42.279178 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Jul 7 02:14:42.279255 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
Jul 7 02:14:42.279333 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
Jul 7 02:14:42.279408 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jul 7 02:14:42.279418 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jul 7 02:14:42.279427 kernel: cma: number of available pages: => 0 free of 4096 total pages
Jul 7 02:14:42.279436 kernel: igb 0003:03:00.0: Adding to iommu group 34
Jul 7 02:14:42.279518 kernel: cma: number of available pages: => 0 free of 4096 total pages
Jul 7 02:14:42.279528 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35
Jul 7 02:14:42.064759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 02:14:42.220631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 02:14:42.284351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 02:14:42.293650 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 02:14:42.315703 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 02:14:42.337907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 02:14:42.367042 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
Jul 7 02:14:42.452575 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000000100000010
Jul 7 02:14:42.452793 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Jul 7 02:14:42.452872 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
Jul 7 02:14:42.452948 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
Jul 7 02:14:42.453022 kernel: hub 1-0:1.0: USB hub found
Jul 7 02:14:42.453115 kernel: hub 1-0:1.0: 4 ports detected
Jul 7 02:14:42.453187 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jul 7 02:14:42.453274 kernel: hub 2-0:1.0: USB hub found
Jul 7 02:14:42.379952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
Jul 7 02:14:42.599135 kernel: mlx5_core 0001:01:00.0: PTM is not supported by PCIe
Jul 7 02:14:42.599258 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
Jul 7 02:14:42.599334 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 7 02:14:42.599407 kernel: hub 2-0:1.0: 4 ports detected
Jul 7 02:14:42.599494 kernel: igb 0003:03:00.0: added PHC on eth0
Jul 7 02:14:42.599577 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Jul 7 02:14:42.599649 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0a:d4:d6
Jul 7 02:14:42.599725 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
Jul 7 02:14:42.599796 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Jul 7 02:14:42.599866 kernel: igb 0003:03:00.1: Adding to iommu group 36
Jul 7 02:14:42.485316 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Jul 7 02:14:42.622124 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Jul 7 02:14:42.644391 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
Jul 7 02:14:42.653127 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 02:14:42.765955 kernel: igb 0003:03:00.1: added PHC on eth1
Jul 7 02:14:42.766072 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
Jul 7 02:14:42.766146 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0a:d4:d7
Jul 7 02:14:42.766218 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
Jul 7 02:14:42.766288 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Jul 7 02:14:42.766358 kernel: igb 0003:03:00.1 eno2: renamed from eth1
Jul 7 02:14:42.766440 kernel: igb 0003:03:00.0 eno1: renamed from eth0
Jul 7 02:14:42.766548 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
Jul 7 02:14:42.668354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 02:14:42.771058 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 02:14:42.812351 kernel: mlx5_core 0001:01:00.0: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048)
Jul 7 02:14:42.812454 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
Jul 7 02:14:42.807967 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 02:14:42.836292 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 02:14:42.841998 disk-uuid[1346]: Primary Header is updated.
Jul 7 02:14:42.841998 disk-uuid[1346]: Secondary Entries is updated.
Jul 7 02:14:42.841998 disk-uuid[1346]: Secondary Header is updated.
Jul 7 02:14:42.873342 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 02:14:42.889452 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 02:14:42.912473 kernel: hub 1-3:1.0: USB hub found
Jul 7 02:14:42.912615 kernel: hub 1-3:1.0: 4 ports detected
Jul 7 02:14:43.013698 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
Jul 7 02:14:43.048181 kernel: hub 2-3:1.0: USB hub found
Jul 7 02:14:43.048361 kernel: hub 2-3:1.0: 4 ports detected
Jul 7 02:14:43.131694 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 7 02:14:43.143690 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
Jul 7 02:14:43.161233 kernel: mlx5_core 0001:01:00.1: PTM is not supported by PCIe
Jul 7 02:14:43.161393 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
Jul 7 02:14:43.176163 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul 7 02:14:43.520689 kernel: mlx5_core 0001:01:00.1: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048)
Jul 7 02:14:43.538704 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
Jul 7 02:14:43.862698 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 02:14:43.862723 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 7 02:14:43.862982 disk-uuid[1348]: The operation has completed successfully.
Jul 7 02:14:43.907797 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
Jul 7 02:14:43.907989 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
Jul 7 02:14:43.943626 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 02:14:43.943742 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 02:14:43.954973 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 02:14:43.971726 sh[1536]: Success
Jul 7 02:14:44.010777 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 02:14:44.010813 kernel: device-mapper: uevent: version 1.0.3
Jul 7 02:14:44.020398 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 02:14:44.047690 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 7 02:14:44.077441 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 02:14:44.089449 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 02:14:44.111664 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 02:14:44.117828 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 02:14:44.117849 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (1551)
Jul 7 02:14:44.118712 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d
Jul 7 02:14:44.118756 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 02:14:44.118773 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 02:14:44.207532 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 02:14:44.213798 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 02:14:44.224075 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 02:14:44.225190 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 02:14:44.243222 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 02:14:44.360124 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:6) scanned by mount (1576)
Jul 7 02:14:44.360141 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 02:14:44.360151 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 02:14:44.360160 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 02:14:44.360170 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 02:14:44.355369 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 02:14:44.368431 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 02:14:44.378035 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 02:14:44.393200 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 02:14:44.429986 systemd-networkd[1725]: lo: Link UP
Jul 7 02:14:44.429991 systemd-networkd[1725]: lo: Gained carrier
Jul 7 02:14:44.433409 systemd-networkd[1725]: Enumeration completed
Jul 7 02:14:44.433789 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 02:14:44.434660 systemd-networkd[1725]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 02:14:44.442449 systemd[1]: Reached target network.target - Network.
Jul 7 02:14:44.486045 systemd-networkd[1725]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 02:14:44.504860 ignition[1720]: Ignition 2.21.0
Jul 7 02:14:44.504868 ignition[1720]: Stage: fetch-offline
Jul 7 02:14:44.509888 unknown[1720]: fetched base config from "system"
Jul 7 02:14:44.504897 ignition[1720]: no configs at "/usr/lib/ignition/base.d"
Jul 7 02:14:44.509895 unknown[1720]: fetched user config from "system"
Jul 7 02:14:44.504905 ignition[1720]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 02:14:44.512995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 02:14:44.505093 ignition[1720]: parsed url from cmdline: ""
Jul 7 02:14:44.521653 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 02:14:44.505096 ignition[1720]: no config URL provided
Jul 7 02:14:44.522716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 02:14:44.505100 ignition[1720]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 02:14:44.539787 systemd-networkd[1725]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 02:14:44.505152 ignition[1720]: parsing config with SHA512: a806b3fb2f2b129150210ca1625c65877b584841e2231a426e3db90f6c6d3236c13b4c5cb7b3b304675b4142209c04438a76d7d347b640a697e86db7515845f9
Jul 7 02:14:44.510301 ignition[1720]: fetch-offline: fetch-offline passed
Jul 7 02:14:44.510305 ignition[1720]: POST message to Packet Timeline
Jul 7 02:14:44.510311 ignition[1720]: POST Status error: resource requires networking
Jul 7 02:14:44.510366 ignition[1720]: Ignition finished successfully
Jul 7 02:14:44.576699 ignition[1782]: Ignition 2.21.0
Jul 7 02:14:44.576704 ignition[1782]: Stage: kargs
Jul 7 02:14:44.576949 ignition[1782]: no configs at "/usr/lib/ignition/base.d"
Jul 7 02:14:44.576958 ignition[1782]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 02:14:44.581488 ignition[1782]: kargs: kargs passed
Jul 7 02:14:44.581498 ignition[1782]: POST message to Packet Timeline
Jul 7 02:14:44.581838 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #1
Jul 7 02:14:44.586410 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37757->[::1]:53: read: connection refused
Jul 7 02:14:44.786531 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #2
Jul 7 02:14:44.787163 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47820->[::1]:53: read: connection refused
Jul 7 02:14:45.125699 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
Jul 7 02:14:45.128724 systemd-networkd[1725]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 02:14:45.187621 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #3
Jul 7 02:14:45.188018 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59083->[::1]:53: read: connection refused
Jul 7 02:14:45.738700 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
Jul 7 02:14:45.741429 systemd-networkd[1725]: eno1: Link UP
Jul 7 02:14:45.741561 systemd-networkd[1725]: eno2: Link UP
Jul 7 02:14:45.741673 systemd-networkd[1725]: enP1p1s0f0np0: Link UP
Jul 7 02:14:45.741890 systemd-networkd[1725]: enP1p1s0f0np0: Gained carrier
Jul 7 02:14:45.758883 systemd-networkd[1725]: enP1p1s0f1np1: Link UP
Jul 7 02:14:45.760177 systemd-networkd[1725]: enP1p1s0f1np1: Gained carrier
Jul 7 02:14:45.802728 systemd-networkd[1725]: enP1p1s0f0np0: DHCPv4 address 147.28.151.230/30, gateway 147.28.151.229 acquired from 147.28.144.140
Jul 7 02:14:45.988196 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #4
Jul 7 02:14:45.988892 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35800->[::1]:53: read: connection refused
Jul 7 02:14:46.964762 systemd-networkd[1725]: enP1p1s0f0np0: Gained IPv6LL
Jul 7 02:14:47.476746 systemd-networkd[1725]: enP1p1s0f1np1: Gained IPv6LL
Jul 7 02:14:47.589548 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #5
Jul 7 02:14:47.589964 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58862->[::1]:53: read: connection refused
Jul 7 02:14:50.792595 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #6
Jul 7 02:14:51.356895 ignition[1782]: GET result: OK
Jul 7 02:14:51.710672 ignition[1782]: Ignition finished successfully
Jul 7 02:14:51.714812 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 02:14:51.717101 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 02:14:51.760917 ignition[1821]: Ignition 2.21.0
Jul 7 02:14:51.760925 ignition[1821]: Stage: disks
Jul 7 02:14:51.761065 ignition[1821]: no configs at "/usr/lib/ignition/base.d"
Jul 7 02:14:51.761074 ignition[1821]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 02:14:51.761938 ignition[1821]: disks: disks passed
Jul 7 02:14:51.761942 ignition[1821]: POST message to Packet Timeline
Jul 7 02:14:51.761960 ignition[1821]: GET https://metadata.packet.net/metadata: attempt #1
Jul 7 02:14:52.840104 ignition[1821]: GET result: OK
Jul 7 02:14:53.306545 ignition[1821]: Ignition finished successfully
Jul 7 02:14:53.309784 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 02:14:53.315038 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 02:14:53.322607 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 02:14:53.330468 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 02:14:53.338917 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 02:14:53.347751 systemd[1]: Reached target basic.target - Basic System.
Jul 7 02:14:53.357982 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 02:14:53.396075 systemd-fsck[1843]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 02:14:53.400756 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 02:14:53.407574 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 02:14:53.504612 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 02:14:53.509628 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none.
Jul 7 02:14:53.514982 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 02:14:53.526117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 02:14:53.553273 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 02:14:53.562688 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (1856)
Jul 7 02:14:53.562711 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 02:14:53.562722 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 02:14:53.562732 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 02:14:53.630199 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 02:14:53.655972 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jul 7 02:14:53.667763 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 02:14:53.667793 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 02:14:53.676032 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 02:14:53.690758 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 02:14:53.719520 coreos-metadata[1874]: Jul 07 02:14:53.705 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 7 02:14:53.730940 coreos-metadata[1873]: Jul 07 02:14:53.705 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 7 02:14:53.704947 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 02:14:53.751026 initrd-setup-root[1890]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 02:14:53.757631 initrd-setup-root[1897]: cut: /sysroot/etc/group: No such file or directory
Jul 7 02:14:53.764125 initrd-setup-root[1904]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 02:14:53.770646 initrd-setup-root[1911]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 02:14:53.841840 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 02:14:53.854093 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 02:14:53.876254 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 02:14:53.884689 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 02:14:53.909507 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 02:14:53.919554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 02:14:53.934038 ignition[1986]: INFO : Ignition 2.21.0
Jul 7 02:14:53.934038 ignition[1986]: INFO : Stage: mount
Jul 7 02:14:53.945272 ignition[1986]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 02:14:53.945272 ignition[1986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 02:14:53.945272 ignition[1986]: INFO : mount: mount passed
Jul 7 02:14:53.945272 ignition[1986]: INFO : POST message to Packet Timeline
Jul 7 02:14:53.945272 ignition[1986]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 7 02:14:54.254670 coreos-metadata[1873]: Jul 07 02:14:54.254 INFO Fetch successful
Jul 7 02:14:54.296361 coreos-metadata[1873]: Jul 07 02:14:54.296 INFO wrote hostname ci-4372.0.1-a-e89e5d604b to /sysroot/etc/hostname
Jul 7 02:14:54.300723 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 02:14:54.493797 ignition[1986]: INFO : GET result: OK
Jul 7 02:14:54.770291 coreos-metadata[1874]: Jul 07 02:14:54.770 INFO Fetch successful
Jul 7 02:14:54.815521 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jul 7 02:14:54.815603 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jul 7 02:14:54.841671 ignition[1986]: INFO : Ignition finished successfully
Jul 7 02:14:54.844797 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 02:14:54.853834 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 02:14:54.886750 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 02:14:54.928703 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (2012)
Jul 7 02:14:54.928739 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 02:14:54.943324 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 02:14:54.956539 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 02:14:54.965525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 02:14:55.003942 ignition[2030]: INFO : Ignition 2.21.0
Jul 7 02:14:55.003942 ignition[2030]: INFO : Stage: files
Jul 7 02:14:55.014029 ignition[2030]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 02:14:55.014029 ignition[2030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 02:14:55.014029 ignition[2030]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 02:14:55.014029 ignition[2030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 02:14:55.014029 ignition[2030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 02:14:55.014029 ignition[2030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 02:14:55.014029 ignition[2030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 02:14:55.014029 ignition[2030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 02:14:55.014029 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 7 02:14:55.014029 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 7 02:14:55.010543 unknown[2030]: wrote ssh authorized keys file for user: core
Jul 7 02:14:55.126634 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 02:14:55.213553 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 02:14:55.224588 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 7 02:14:55.522218 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 02:14:55.894715 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 02:14:55.894715 ignition[2030]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 02:14:55.919597 ignition[2030]: INFO : files: files passed
Jul 7 02:14:55.919597 ignition[2030]: INFO : POST message to Packet Timeline
Jul 7 02:14:55.919597 ignition[2030]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 7 02:14:56.520044 ignition[2030]: INFO : GET result: OK
Jul 7 02:14:56.846474 ignition[2030]: INFO : Ignition finished successfully
Jul 7 02:14:56.849157 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 02:14:56.859891 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 02:14:56.883306 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 02:14:56.901967 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 02:14:56.902144 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 02:14:56.920395 initrd-setup-root-after-ignition[2075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 02:14:56.920395 initrd-setup-root-after-ignition[2075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 02:14:56.914928 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 02:14:56.971559 initrd-setup-root-after-ignition[2079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 02:14:56.927889 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 02:14:56.944527 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 02:14:57.003069 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 02:14:57.004756 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 02:14:57.014842 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 02:14:57.031087 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 02:14:57.042141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 02:14:57.043172 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 02:14:57.077396 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 02:14:57.090158 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 02:14:57.118768 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 02:14:57.130601 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 02:14:57.136533 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 02:14:57.148130 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 02:14:57.148238 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 02:14:57.159733 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 02:14:57.171008 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 02:14:57.182433 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 02:14:57.193789 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 02:14:57.204977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 02:14:57.216185 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 02:14:57.227418 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 02:14:57.238654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 02:14:57.249878 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 02:14:57.261157 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 02:14:57.278014 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 02:14:57.289304 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 02:14:57.289402 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 02:14:57.306307 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 02:14:57.317417 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 02:14:57.328423 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 02:14:57.331754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 02:14:57.339722 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 02:14:57.339830 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 02:14:57.351110 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 02:14:57.351208 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 02:14:57.362356 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 02:14:57.379101 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 02:14:57.380717 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 02:14:57.390405 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 02:14:57.401773 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 02:14:57.413205 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 02:14:57.413301 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 02:14:57.424704 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 02:14:57.424803 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 02:14:57.436082 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 02:14:57.534078 ignition[2100]: INFO : Ignition 2.21.0
Jul 7 02:14:57.534078 ignition[2100]: INFO : Stage: umount
Jul 7 02:14:57.534078 ignition[2100]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 02:14:57.534078 ignition[2100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 02:14:57.534078 ignition[2100]: INFO : umount: umount passed
Jul 7 02:14:57.534078 ignition[2100]: INFO : POST message to Packet Timeline
Jul 7 02:14:57.534078 ignition[2100]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 7 02:14:57.436182 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 02:14:57.447526 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 02:14:57.447612 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 02:14:57.464554 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 02:14:57.464639 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 02:14:57.476617 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 02:14:57.487276 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 02:14:57.487380 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 02:14:57.505325 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 02:14:57.516085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 02:14:57.516190 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 02:14:57.528394 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 02:14:57.528481 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 02:14:57.542545 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 02:14:57.543447 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 02:14:57.544127 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 02:14:57.554807 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 02:14:57.554884 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 02:14:58.148580 ignition[2100]: INFO : GET result: OK
Jul 7 02:14:58.423302 ignition[2100]: INFO : Ignition finished successfully
Jul 7 02:14:58.425977 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 02:14:58.426745 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 02:14:58.433689 systemd[1]: Stopped target network.target - Network.
Jul 7 02:14:58.442564 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 02:14:58.442638 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 02:14:58.452051 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 02:14:58.452085 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 02:14:58.461510 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 02:14:58.461559 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 02:14:58.471046 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 02:14:58.471080 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 02:14:58.480693 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 02:14:58.480770 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 02:14:58.490552 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 02:14:58.500289 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 02:14:58.510245 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 02:14:58.511710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 02:14:58.524184 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 02:14:58.525147 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 02:14:58.525370 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 02:14:58.538247 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 02:14:58.538544 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 02:14:58.539720 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 02:14:58.545695 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 02:14:58.546523 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 02:14:58.555050 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 02:14:58.555157 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 02:14:58.567183 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 02:14:58.575502 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 02:14:58.575557 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 02:14:58.586094 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 02:14:58.586137 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 02:14:58.596962 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 02:14:58.597014 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 02:14:58.612892 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 02:14:58.625198 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 02:14:58.632015 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 02:14:58.633729 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 02:14:58.642607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 02:14:58.642888 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 02:14:58.658157 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 02:14:58.658225 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 02:14:58.669327 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 02:14:58.669387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 02:14:58.686391 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 02:14:58.686444 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 02:14:58.697606 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 02:14:58.697661 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 02:14:58.715826 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 02:14:58.726666 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 02:14:58.726738 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 02:14:58.738436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 02:14:58.738474 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 02:14:58.750477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 02:14:58.750532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 02:14:58.769918 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 02:14:58.769983 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 02:14:58.770029 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 02:14:58.770394 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 02:14:58.770467 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 02:14:59.291604 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 02:14:59.292761 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 02:14:59.303506 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 02:14:59.314596 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 02:14:59.341823 systemd[1]: Switching root.
Jul 7 02:14:59.412509 systemd-journald[908]: Journal stopped
Jul 7 02:15:01.612736 systemd-journald[908]: Received SIGTERM from PID 1 (systemd).
Jul 7 02:15:01.612764 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 02:15:01.612774 kernel: SELinux: policy capability open_perms=1
Jul 7 02:15:01.612782 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 02:15:01.612789 kernel: SELinux: policy capability always_check_network=0
Jul 7 02:15:01.612796 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 02:15:01.612804 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 02:15:01.612814 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 02:15:01.612821 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 02:15:01.612828 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 02:15:01.612842 kernel: audit: type=1403 audit(1751854499.620:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 02:15:01.612851 systemd[1]: Successfully loaded SELinux policy in 141.966ms.
Jul 7 02:15:01.612860 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.678ms.
Jul 7 02:15:01.612869 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 02:15:01.612880 systemd[1]: Detected architecture arm64.
Jul 7 02:15:01.612888 systemd[1]: Detected first boot.
Jul 7 02:15:01.612896 systemd[1]: Hostname set to .
Jul 7 02:15:01.612905 systemd[1]: Initializing machine ID from random generator.
Jul 7 02:15:01.612914 zram_generator::config[2171]: No configuration found.
Jul 7 02:15:01.612924 systemd[1]: Populated /etc with preset unit settings.
Jul 7 02:15:01.612933 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 02:15:01.612941 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 02:15:01.612950 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 02:15:01.612958 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 02:15:01.612967 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 02:15:01.612975 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 02:15:01.612986 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 02:15:01.612994 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 02:15:01.613003 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 02:15:01.613012 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 02:15:01.613020 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 02:15:01.613029 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 02:15:01.613038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 02:15:01.613046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 02:15:01.613056 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 02:15:01.613065 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 02:15:01.613074 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 02:15:01.613083 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 02:15:01.613091 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 7 02:15:01.613100 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 02:15:01.613111 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 02:15:01.613119 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 02:15:01.613130 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 02:15:01.613139 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 02:15:01.613148 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 02:15:01.613157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 02:15:01.613166 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 02:15:01.613174 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 02:15:01.613183 systemd[1]: Reached target swap.target - Swaps.
Jul 7 02:15:01.613193 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 02:15:01.613202 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 02:15:01.613211 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 02:15:01.613222 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 02:15:01.613231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 02:15:01.613241 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 02:15:01.613250 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 02:15:01.613259 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 02:15:01.613268 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 02:15:01.613277 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 02:15:01.613286 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 02:15:01.613295 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 02:15:01.613304 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 02:15:01.613315 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 02:15:01.613324 systemd[1]: Reached target machines.target - Containers.
Jul 7 02:15:01.613333 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 02:15:01.613342 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 02:15:01.613351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 02:15:01.613360 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 02:15:01.613369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 02:15:01.613378 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 02:15:01.613387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 02:15:01.613397 kernel: ACPI: bus type drm_connector registered
Jul 7 02:15:01.613406 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 02:15:01.613415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 02:15:01.613424 kernel: fuse: init (API version 7.41)
Jul 7 02:15:01.613431 kernel: loop: module loaded
Jul 7 02:15:01.613440 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 02:15:01.613449 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 02:15:01.613458 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 02:15:01.613468 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 02:15:01.613477 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 02:15:01.613486 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 02:15:01.613495 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 02:15:01.613504 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 02:15:01.613513 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 02:15:01.613522 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 02:15:01.613549 systemd-journald[2283]: Collecting audit messages is disabled.
Jul 7 02:15:01.613570 systemd-journald[2283]: Journal started
Jul 7 02:15:01.613588 systemd-journald[2283]: Runtime Journal (/run/log/journal/e3c737b0d34c4544bfcfc17118e019d4) is 8M, max 4G, 3.9G free.
Jul 7 02:15:00.172259 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 02:15:00.196280 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 02:15:00.196626 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 02:15:00.196932 systemd[1]: systemd-journald.service: Consumed 3.430s CPU time.
Jul 7 02:15:01.636702 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 02:15:01.672696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 02:15:01.695707 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 02:15:01.695724 systemd[1]: Stopped verity-setup.service.
Jul 7 02:15:01.721698 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 02:15:01.727046 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 02:15:01.732636 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 02:15:01.738144 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 02:15:01.743727 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 02:15:01.749260 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 02:15:01.754695 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 02:15:01.761714 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 02:15:01.769774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 02:15:01.775390 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 02:15:01.775563 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 02:15:01.780980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 02:15:01.781140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 02:15:01.786598 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 02:15:01.786782 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 02:15:01.792054 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 02:15:01.792793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 02:15:01.798130 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 02:15:01.798291 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 02:15:01.803618 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 02:15:01.804864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 02:15:01.810182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 02:15:01.818294 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 02:15:01.823549 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 02:15:01.829883 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 02:15:01.843448 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 02:15:01.849997 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 02:15:01.871417 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 02:15:01.876442 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 02:15:01.876471 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 02:15:01.882128 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 02:15:01.888072 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 02:15:01.892954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:15:01.894334 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 02:15:01.900068 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 02:15:01.904973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 02:15:01.905995 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 02:15:01.910994 systemd-journald[2283]: Time spent on flushing to /var/log/journal/e3c737b0d34c4544bfcfc17118e019d4 is 24.911ms for 2527 entries. Jul 7 02:15:01.910994 systemd-journald[2283]: System Journal (/var/log/journal/e3c737b0d34c4544bfcfc17118e019d4) is 8M, max 195.6M, 187.6M free. Jul 7 02:15:01.953101 systemd-journald[2283]: Received client request to flush runtime journal. Jul 7 02:15:01.953144 kernel: loop0: detected capacity change from 0 to 107312 Jul 7 02:15:01.911010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 02:15:01.912173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 02:15:01.929117 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 02:15:01.934860 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 02:15:01.941026 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 02:15:01.956788 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 02:15:01.957689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 02:15:01.971733 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jul 7 02:15:01.976611 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 02:15:01.982023 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 02:15:01.986744 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 02:15:01.991614 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 02:15:01.999417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 02:15:02.005078 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 02:15:02.028694 kernel: loop1: detected capacity change from 0 to 8 Jul 7 02:15:02.032994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 02:15:02.040541 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 02:15:02.041166 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 02:15:02.054396 systemd-tmpfiles[2346]: ACLs are not supported, ignoring. Jul 7 02:15:02.054408 systemd-tmpfiles[2346]: ACLs are not supported, ignoring. Jul 7 02:15:02.058248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 02:15:02.077690 kernel: loop2: detected capacity change from 0 to 138376 Jul 7 02:15:02.123698 kernel: loop3: detected capacity change from 0 to 207008 Jul 7 02:15:02.149319 ldconfig[2316]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 02:15:02.150904 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jul 7 02:15:02.189699 kernel: loop4: detected capacity change from 0 to 107312 Jul 7 02:15:02.205693 kernel: loop5: detected capacity change from 0 to 8 Jul 7 02:15:02.217697 kernel: loop6: detected capacity change from 0 to 138376 Jul 7 02:15:02.235695 kernel: loop7: detected capacity change from 0 to 207008 Jul 7 02:15:02.242283 (sd-merge)[2359]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 7 02:15:02.242744 (sd-merge)[2359]: Merged extensions into '/usr'. Jul 7 02:15:02.245941 systemd[1]: Reload requested from client PID 2326 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 02:15:02.245952 systemd[1]: Reloading... Jul 7 02:15:02.292690 zram_generator::config[2384]: No configuration found. Jul 7 02:15:02.370938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:15:02.444166 systemd[1]: Reloading finished in 197 ms. Jul 7 02:15:02.478190 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 02:15:02.483034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 02:15:02.511097 systemd[1]: Starting ensure-sysext.service... Jul 7 02:15:02.516967 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 02:15:02.523577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 02:15:02.534407 systemd[1]: Reload requested from client PID 2438 ('systemctl') (unit ensure-sysext.service)... Jul 7 02:15:02.534420 systemd[1]: Reloading... Jul 7 02:15:02.536189 systemd-tmpfiles[2439]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jul 7 02:15:02.536221 systemd-tmpfiles[2439]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 02:15:02.536456 systemd-tmpfiles[2439]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 02:15:02.536641 systemd-tmpfiles[2439]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 02:15:02.537244 systemd-tmpfiles[2439]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 02:15:02.537440 systemd-tmpfiles[2439]: ACLs are not supported, ignoring. Jul 7 02:15:02.537483 systemd-tmpfiles[2439]: ACLs are not supported, ignoring. Jul 7 02:15:02.540161 systemd-tmpfiles[2439]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 02:15:02.540169 systemd-tmpfiles[2439]: Skipping /boot Jul 7 02:15:02.548875 systemd-tmpfiles[2439]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 02:15:02.548884 systemd-tmpfiles[2439]: Skipping /boot Jul 7 02:15:02.552161 systemd-udevd[2440]: Using default interface naming scheme 'v255'. Jul 7 02:15:02.582690 zram_generator::config[2474]: No configuration found. Jul 7 02:15:02.631697 kernel: IPMI message handler: version 39.2 Jul 7 02:15:02.641695 kernel: ipmi device interface Jul 7 02:15:02.653809 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 7 02:15:02.653874 kernel: MACsec IEEE 802.1AE Jul 7 02:15:02.653920 kernel: ipmi_si: IPMI System Interface driver Jul 7 02:15:02.674424 kernel: ipmi_si: Unable to find any System Interface(s) Jul 7 02:15:02.676532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:15:02.768570 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jul 7 02:15:02.768701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 7 02:15:02.773393 systemd[1]: Reloading finished in 238 ms. Jul 7 02:15:02.792967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 02:15:02.811330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 02:15:02.833433 systemd[1]: Finished ensure-sysext.service. Jul 7 02:15:02.854628 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 02:15:02.878520 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 02:15:02.883336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:15:02.884225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 02:15:02.889903 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 02:15:02.895563 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 02:15:02.901212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 02:15:02.906076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:15:02.906934 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 02:15:02.911746 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 02:15:02.912859 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 7 02:15:02.919040 augenrules[2707]: No rules Jul 7 02:15:02.919388 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 02:15:02.925855 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 02:15:02.932088 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 02:15:02.937501 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 02:15:02.942880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 02:15:02.948053 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 02:15:02.948261 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 02:15:02.952961 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 02:15:02.958709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 02:15:02.958871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 02:15:02.963325 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 02:15:02.963476 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 02:15:02.967897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 02:15:02.968040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 02:15:02.972724 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 02:15:02.972877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 02:15:02.978297 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 02:15:02.983096 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 02:15:02.989585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 02:15:02.999561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 02:15:02.999680 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 02:15:03.000881 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 02:15:03.023120 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 02:15:03.027617 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 02:15:03.028034 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 02:15:03.034550 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 02:15:03.060268 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 02:15:03.122361 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 02:15:03.124562 systemd-resolved[2714]: Positive Trust Anchors: Jul 7 02:15:03.124574 systemd-resolved[2714]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 02:15:03.124608 systemd-resolved[2714]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 02:15:03.127049 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 02:15:03.128715 systemd-resolved[2714]: Using system hostname 'ci-4372.0.1-a-e89e5d604b'. Jul 7 02:15:03.131554 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 02:15:03.135766 systemd-networkd[2713]: lo: Link UP Jul 7 02:15:03.135772 systemd-networkd[2713]: lo: Gained carrier Jul 7 02:15:03.136903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 02:15:03.139227 systemd-networkd[2713]: bond0: netdev ready Jul 7 02:15:03.141280 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 02:15:03.145677 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 02:15:03.148259 systemd-networkd[2713]: Enumeration completed Jul 7 02:15:03.150054 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 02:15:03.154568 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 02:15:03.156501 systemd-networkd[2713]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:95:bd:80.network. Jul 7 02:15:03.158999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 7 02:15:03.163331 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 02:15:03.167645 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 02:15:03.167668 systemd[1]: Reached target paths.target - Path Units. Jul 7 02:15:03.171911 systemd[1]: Reached target timers.target - Timer Units. Jul 7 02:15:03.176894 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 02:15:03.182554 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 02:15:03.188858 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 02:15:03.202533 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 02:15:03.207309 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 02:15:03.212154 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 02:15:03.216677 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 02:15:03.221203 systemd[1]: Reached target network.target - Network. Jul 7 02:15:03.225588 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 02:15:03.229943 systemd[1]: Reached target basic.target - Basic System. Jul 7 02:15:03.234295 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 02:15:03.234315 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 02:15:03.235383 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 02:15:03.259363 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 02:15:03.264931 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jul 7 02:15:03.270553 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 02:15:03.276082 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 02:15:03.280855 coreos-metadata[2757]: Jul 07 02:15:03.280 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 02:15:03.281653 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 02:15:03.283414 coreos-metadata[2757]: Jul 07 02:15:03.283 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 02:15:03.286085 jq[2762]: false Jul 7 02:15:03.286189 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 02:15:03.287299 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 02:15:03.292943 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 02:15:03.297863 extend-filesystems[2764]: Found /dev/nvme0n1p6 Jul 7 02:15:03.302858 extend-filesystems[2764]: Found /dev/nvme0n1p9 Jul 7 02:15:03.298517 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 02:15:03.312118 extend-filesystems[2764]: Checking size of /dev/nvme0n1p9 Jul 7 02:15:03.308494 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 02:15:03.321315 extend-filesystems[2764]: Resized partition /dev/nvme0n1p9 Jul 7 02:15:03.343560 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Jul 7 02:15:03.321021 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 02:15:03.343813 extend-filesystems[2786]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 02:15:03.339564 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jul 7 02:15:03.349306 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 02:15:03.358129 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 02:15:03.358676 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 02:15:03.359366 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 02:15:03.365356 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 02:15:03.371759 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 02:15:03.373162 jq[2799]: true Jul 7 02:15:03.377087 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 02:15:03.377266 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 02:15:03.377504 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 02:15:03.377687 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 02:15:03.382082 systemd-logind[2788]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 02:15:03.382504 systemd-logind[2788]: New seat seat0. Jul 7 02:15:03.383326 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 02:15:03.388731 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 02:15:03.388920 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 7 02:15:03.396696 update_engine[2798]: I20250707 02:15:03.396567 2798 main.cc:92] Flatcar Update Engine starting Jul 7 02:15:03.398598 (ntainerd)[2803]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 02:15:03.400601 jq[2802]: true Jul 7 02:15:03.407722 tar[2801]: linux-arm64/LICENSE Jul 7 02:15:03.407896 tar[2801]: linux-arm64/helm Jul 7 02:15:03.414331 dbus-daemon[2758]: [system] SELinux support is enabled Jul 7 02:15:03.414713 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 02:15:03.417805 update_engine[2798]: I20250707 02:15:03.417771 2798 update_check_scheduler.cc:74] Next update check in 11m57s Jul 7 02:15:03.425120 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 02:15:03.425145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 02:15:03.425687 dbus-daemon[2758]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 02:15:03.429764 bash[2828]: Updated "/home/core/.ssh/authorized_keys" Jul 7 02:15:03.430135 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 02:15:03.430152 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 02:15:03.435328 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 02:15:03.441206 systemd[1]: Started update-engine.service - Update Engine. Jul 7 02:15:03.447988 systemd[1]: Starting sshkeys.service... Jul 7 02:15:03.473107 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 7 02:15:03.482621 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 02:15:03.488562 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 02:15:03.507517 locksmithd[2831]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 02:15:03.508294 coreos-metadata[2839]: Jul 07 02:15:03.508 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 02:15:03.509511 coreos-metadata[2839]: Jul 07 02:15:03.509 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 02:15:03.556631 containerd[2803]: time="2025-07-07T02:15:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 02:15:03.557731 containerd[2803]: time="2025-07-07T02:15:03.557705240Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 02:15:03.567074 containerd[2803]: time="2025-07-07T02:15:03.567044320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.96µs" Jul 7 02:15:03.567093 containerd[2803]: time="2025-07-07T02:15:03.567074960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 02:15:03.567110 containerd[2803]: time="2025-07-07T02:15:03.567093360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 02:15:03.567278 containerd[2803]: time="2025-07-07T02:15:03.567264600Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 02:15:03.567296 containerd[2803]: time="2025-07-07T02:15:03.567281640Z" level=info msg="loading plugin" id=io.containerd.content.v1.content 
type=io.containerd.content.v1 Jul 7 02:15:03.567317 containerd[2803]: time="2025-07-07T02:15:03.567303800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567364 containerd[2803]: time="2025-07-07T02:15:03.567351520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567382 containerd[2803]: time="2025-07-07T02:15:03.567364360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567604 containerd[2803]: time="2025-07-07T02:15:03.567589280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567627 containerd[2803]: time="2025-07-07T02:15:03.567604280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567627 containerd[2803]: time="2025-07-07T02:15:03.567615160Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567627 containerd[2803]: time="2025-07-07T02:15:03.567623880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567728 containerd[2803]: time="2025-07-07T02:15:03.567706560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567895 containerd[2803]: time="2025-07-07T02:15:03.567881760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567922 containerd[2803]: 
time="2025-07-07T02:15:03.567910680Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 02:15:03.567943 containerd[2803]: time="2025-07-07T02:15:03.567921600Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 02:15:03.567961 containerd[2803]: time="2025-07-07T02:15:03.567947880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 02:15:03.568146 containerd[2803]: time="2025-07-07T02:15:03.568135960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 02:15:03.568211 containerd[2803]: time="2025-07-07T02:15:03.568200680Z" level=info msg="metadata content store policy set" policy=shared Jul 7 02:15:03.576120 containerd[2803]: time="2025-07-07T02:15:03.576102480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 02:15:03.576158 containerd[2803]: time="2025-07-07T02:15:03.576146320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 02:15:03.576177 containerd[2803]: time="2025-07-07T02:15:03.576160520Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 02:15:03.576177 containerd[2803]: time="2025-07-07T02:15:03.576173560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 02:15:03.576208 containerd[2803]: time="2025-07-07T02:15:03.576184600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 02:15:03.576208 containerd[2803]: time="2025-07-07T02:15:03.576197440Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 
7 02:15:03.576251 containerd[2803]: time="2025-07-07T02:15:03.576208000Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 02:15:03.576251 containerd[2803]: time="2025-07-07T02:15:03.576218480Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 02:15:03.576251 containerd[2803]: time="2025-07-07T02:15:03.576228880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 02:15:03.576251 containerd[2803]: time="2025-07-07T02:15:03.576238600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 02:15:03.576251 containerd[2803]: time="2025-07-07T02:15:03.576247480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 02:15:03.576381 containerd[2803]: time="2025-07-07T02:15:03.576259200Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 02:15:03.576381 containerd[2803]: time="2025-07-07T02:15:03.576370920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 02:15:03.576445 containerd[2803]: time="2025-07-07T02:15:03.576390000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 02:15:03.576445 containerd[2803]: time="2025-07-07T02:15:03.576405160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 02:15:03.576445 containerd[2803]: time="2025-07-07T02:15:03.576414640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 02:15:03.576445 containerd[2803]: time="2025-07-07T02:15:03.576423840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 02:15:03.576445 containerd[2803]: 
time="2025-07-07T02:15:03.576434160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 02:15:03.576445 containerd[2803]: time="2025-07-07T02:15:03.576444080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 02:15:03.576554 containerd[2803]: time="2025-07-07T02:15:03.576453560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 02:15:03.576554 containerd[2803]: time="2025-07-07T02:15:03.576463920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 02:15:03.576554 containerd[2803]: time="2025-07-07T02:15:03.576474320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 02:15:03.576554 containerd[2803]: time="2025-07-07T02:15:03.576483920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 02:15:03.576674 containerd[2803]: time="2025-07-07T02:15:03.576662680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 02:15:03.576697 containerd[2803]: time="2025-07-07T02:15:03.576679960Z" level=info msg="Start snapshots syncer" Jul 7 02:15:03.576719 containerd[2803]: time="2025-07-07T02:15:03.576709960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 02:15:03.576928 containerd[2803]: time="2025-07-07T02:15:03.576900560Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 02:15:03.577002 containerd[2803]: time="2025-07-07T02:15:03.576940800Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 02:15:03.577021 containerd[2803]: time="2025-07-07T02:15:03.577003880Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 02:15:03.577126 containerd[2803]: time="2025-07-07T02:15:03.577113120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 02:15:03.577144 containerd[2803]: time="2025-07-07T02:15:03.577134520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 02:15:03.577165 containerd[2803]: time="2025-07-07T02:15:03.577144200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 02:15:03.577165 containerd[2803]: time="2025-07-07T02:15:03.577155440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 02:15:03.577199 containerd[2803]: time="2025-07-07T02:15:03.577165760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 02:15:03.577199 containerd[2803]: time="2025-07-07T02:15:03.577179480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 02:15:03.577199 containerd[2803]: time="2025-07-07T02:15:03.577190040Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 02:15:03.577246 containerd[2803]: time="2025-07-07T02:15:03.577213000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 02:15:03.577246 containerd[2803]: time="2025-07-07T02:15:03.577223280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 02:15:03.577246 containerd[2803]: time="2025-07-07T02:15:03.577233080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 02:15:03.577292 containerd[2803]: time="2025-07-07T02:15:03.577266440Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 02:15:03.577292 containerd[2803]: time="2025-07-07T02:15:03.577278800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 02:15:03.577292 containerd[2803]: time="2025-07-07T02:15:03.577286440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 02:15:03.577341 containerd[2803]: time="2025-07-07T02:15:03.577295720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 02:15:03.577341 containerd[2803]: time="2025-07-07T02:15:03.577302760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 02:15:03.577341 containerd[2803]: time="2025-07-07T02:15:03.577311400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 02:15:03.577341 containerd[2803]: time="2025-07-07T02:15:03.577320880Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 02:15:03.577405 containerd[2803]: time="2025-07-07T02:15:03.577397760Z" level=info msg="runtime interface created" Jul 7 02:15:03.577405 containerd[2803]: time="2025-07-07T02:15:03.577402760Z" level=info msg="created NRI interface" Jul 7 02:15:03.577437 containerd[2803]: time="2025-07-07T02:15:03.577410800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 02:15:03.577437 containerd[2803]: time="2025-07-07T02:15:03.577420960Z" level=info msg="Connect containerd service" Jul 7 02:15:03.577468 containerd[2803]: time="2025-07-07T02:15:03.577443880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 02:15:03.578083 containerd[2803]: 
time="2025-07-07T02:15:03.578064280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 02:15:03.663617 containerd[2803]: time="2025-07-07T02:15:03.663567640Z" level=info msg="Start subscribing containerd event"
Jul 7 02:15:03.663654 containerd[2803]: time="2025-07-07T02:15:03.663632440Z" level=info msg="Start recovering state"
Jul 7 02:15:03.663735 containerd[2803]: time="2025-07-07T02:15:03.663722720Z" level=info msg="Start event monitor"
Jul 7 02:15:03.663756 containerd[2803]: time="2025-07-07T02:15:03.663739400Z" level=info msg="Start cni network conf syncer for default"
Jul 7 02:15:03.663756 containerd[2803]: time="2025-07-07T02:15:03.663747200Z" level=info msg="Start streaming server"
Jul 7 02:15:03.663788 containerd[2803]: time="2025-07-07T02:15:03.663756200Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 02:15:03.663788 containerd[2803]: time="2025-07-07T02:15:03.663763160Z" level=info msg="runtime interface starting up..."
Jul 7 02:15:03.663788 containerd[2803]: time="2025-07-07T02:15:03.663768200Z" level=info msg="starting plugins..."
Jul 7 02:15:03.663788 containerd[2803]: time="2025-07-07T02:15:03.663780440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 02:15:03.663882 containerd[2803]: time="2025-07-07T02:15:03.663858760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 02:15:03.663938 containerd[2803]: time="2025-07-07T02:15:03.663909360Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 02:15:03.663991 containerd[2803]: time="2025-07-07T02:15:03.663983000Z" level=info msg="containerd successfully booted in 0.107714s"
Jul 7 02:15:03.664039 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 02:15:03.740941 tar[2801]: linux-arm64/README.md
Jul 7 02:15:03.770720 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 02:15:03.865700 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889
Jul 7 02:15:03.881811 extend-filesystems[2786]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 7 02:15:03.881811 extend-filesystems[2786]: old_desc_blocks = 1, new_desc_blocks = 112
Jul 7 02:15:03.881811 extend-filesystems[2786]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long.
Jul 7 02:15:03.912791 extend-filesystems[2764]: Resized filesystem in /dev/nvme0n1p9
Jul 7 02:15:03.884176 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 02:15:03.884485 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 02:15:03.897906 systemd[1]: extend-filesystems.service: Consumed 212ms CPU time, 68.9M memory peak.
Jul 7 02:15:04.128418 sshd_keygen[2790]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 02:15:04.147202 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 02:15:04.154549 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 02:15:04.188384 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 02:15:04.188594 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 02:15:04.195606 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 02:15:04.227314 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 02:15:04.234190 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 02:15:04.240827 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 7 02:15:04.246524 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 02:15:04.283538 coreos-metadata[2757]: Jul 07 02:15:04.283 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 02:15:04.283968 coreos-metadata[2757]: Jul 07 02:15:04.283 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 02:15:04.496699 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 7 02:15:04.509640 coreos-metadata[2839]: Jul 07 02:15:04.509 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 02:15:04.510109 coreos-metadata[2839]: Jul 07 02:15:04.510 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 02:15:04.513691 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Jul 7 02:15:04.514756 systemd-networkd[2713]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:95:bd:81.network. Jul 7 02:15:05.144701 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 7 02:15:05.161695 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Jul 7 02:15:05.161817 systemd-networkd[2713]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 7 02:15:05.162854 systemd-networkd[2713]: enP1p1s0f0np0: Link UP Jul 7 02:15:05.163100 systemd-networkd[2713]: enP1p1s0f0np0: Gained carrier Jul 7 02:15:05.163936 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 02:15:05.181428 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 7 02:15:05.182758 systemd-networkd[2713]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:95:bd:80.network. 
Jul 7 02:15:05.183034 systemd-networkd[2713]: enP1p1s0f1np1: Link UP Jul 7 02:15:05.183229 systemd-networkd[2713]: enP1p1s0f1np1: Gained carrier Jul 7 02:15:05.205963 systemd-networkd[2713]: bond0: Link UP Jul 7 02:15:05.206244 systemd-networkd[2713]: bond0: Gained carrier Jul 7 02:15:05.206411 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:05.206995 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:05.207251 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:05.207389 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:05.288232 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 7 02:15:05.288264 kernel: bond0: active interface up! Jul 7 02:15:05.411690 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 7 02:15:06.284074 coreos-metadata[2757]: Jul 07 02:15:06.284 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 7 02:15:06.510118 coreos-metadata[2839]: Jul 07 02:15:06.510 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 7 02:15:06.549035 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:06.677738 systemd-networkd[2713]: bond0: Gained IPv6LL Jul 7 02:15:06.678125 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:06.680043 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 02:15:06.685922 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 02:15:06.693079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:15:06.721147 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 7 02:15:06.743658 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 02:15:07.339505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 02:15:07.345832 (kubelet)[2918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 02:15:07.698091 kubelet[2918]: E0707 02:15:07.698028 2918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 02:15:07.700634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 02:15:07.700777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 02:15:07.701811 systemd[1]: kubelet.service: Consumed 721ms CPU time, 263.3M memory peak.
Jul 7 02:15:08.695992 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2
Jul 7 02:15:08.696320 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity
Jul 7 02:15:08.876818 coreos-metadata[2839]: Jul 07 02:15:08.876 INFO Fetch successful
Jul 7 02:15:08.891942 coreos-metadata[2757]: Jul 07 02:15:08.891 INFO Fetch successful
Jul 7 02:15:08.934757 unknown[2839]: wrote ssh authorized keys file for user: core
Jul 7 02:15:08.990230 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 02:15:08.997170 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Jul 7 02:15:08.997595 update-ssh-keys[2951]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 02:15:09.002861 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 02:15:09.009643 systemd[1]: Finished sshkeys.service.
Jul 7 02:15:09.065193 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 02:15:09.071341 systemd[1]: Started sshd@0-147.28.151.230:22-139.178.68.195:45016.service - OpenSSH per-connection server daemon (139.178.68.195:45016). Jul 7 02:15:09.299366 login[2896]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 7 02:15:09.299868 login[2895]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:09.305478 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 02:15:09.306532 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 02:15:09.311657 systemd-logind[2788]: New session 1 of user core. Jul 7 02:15:09.317888 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 02:15:09.321383 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 02:15:09.324331 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 7 02:15:09.324734 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 02:15:09.327181 (systemd)[2973]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 02:15:09.329310 systemd-logind[2788]: New session c1 of user core. Jul 7 02:15:09.452310 systemd[2973]: Queued start job for default target default.target. Jul 7 02:15:09.465760 systemd[2973]: Created slice app.slice - User Application Slice. Jul 7 02:15:09.465785 systemd[2973]: Reached target paths.target - Paths. Jul 7 02:15:09.465818 systemd[2973]: Reached target timers.target - Timers. Jul 7 02:15:09.467016 systemd[2973]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 02:15:09.475385 systemd[2973]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 02:15:09.475437 systemd[2973]: Reached target sockets.target - Sockets. Jul 7 02:15:09.475479 systemd[2973]: Reached target basic.target - Basic System. Jul 7 02:15:09.475506 systemd[2973]: Reached target default.target - Main User Target. 
Jul 7 02:15:09.475528 systemd[2973]: Startup finished in 141ms. Jul 7 02:15:09.475771 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 02:15:09.477231 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 02:15:09.477385 systemd[1]: Startup finished in 5.276s (kernel) + 20.069s (initrd) + 9.998s (userspace) = 35.344s. Jul 7 02:15:09.486986 sshd[2961]: Accepted publickey for core from 139.178.68.195 port 45016 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:15:09.488283 sshd-session[2961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:09.491335 systemd-logind[2788]: New session 3 of user core. Jul 7 02:15:09.492621 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 02:15:09.852323 systemd[1]: Started sshd@1-147.28.151.230:22-139.178.68.195:40836.service - OpenSSH per-connection server daemon (139.178.68.195:40836). Jul 7 02:15:10.252555 sshd[2998]: Accepted publickey for core from 139.178.68.195 port 40836 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:15:10.253703 sshd-session[2998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:10.257050 systemd-logind[2788]: New session 4 of user core. Jul 7 02:15:10.267856 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 02:15:10.300821 login[2896]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:10.303917 systemd-logind[2788]: New session 2 of user core. Jul 7 02:15:10.315801 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 02:15:10.543327 sshd[3000]: Connection closed by 139.178.68.195 port 40836 Jul 7 02:15:10.543671 sshd-session[2998]: pam_unix(sshd:session): session closed for user core Jul 7 02:15:10.546409 systemd[1]: sshd@1-147.28.151.230:22-139.178.68.195:40836.service: Deactivated successfully. 
Jul 7 02:15:10.548766 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 02:15:10.549732 systemd-logind[2788]: Session 4 logged out. Waiting for processes to exit. Jul 7 02:15:10.550477 systemd-logind[2788]: Removed session 4. Jul 7 02:15:10.620270 systemd[1]: Started sshd@2-147.28.151.230:22-139.178.68.195:40842.service - OpenSSH per-connection server daemon (139.178.68.195:40842). Jul 7 02:15:11.026237 sshd[3017]: Accepted publickey for core from 139.178.68.195 port 40842 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:15:11.027375 sshd-session[3017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:11.030436 systemd-logind[2788]: New session 5 of user core. Jul 7 02:15:11.040854 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 02:15:11.318661 sshd[3019]: Connection closed by 139.178.68.195 port 40842 Jul 7 02:15:11.318948 sshd-session[3017]: pam_unix(sshd:session): session closed for user core Jul 7 02:15:11.321646 systemd[1]: sshd@2-147.28.151.230:22-139.178.68.195:40842.service: Deactivated successfully. Jul 7 02:15:11.323902 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 02:15:11.324449 systemd-logind[2788]: Session 5 logged out. Waiting for processes to exit. Jul 7 02:15:11.325291 systemd-logind[2788]: Removed session 5. Jul 7 02:15:11.397336 systemd[1]: Started sshd@3-147.28.151.230:22-139.178.68.195:40850.service - OpenSSH per-connection server daemon (139.178.68.195:40850). Jul 7 02:15:11.809779 sshd[3027]: Accepted publickey for core from 139.178.68.195 port 40850 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:15:11.810985 sshd-session[3027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:11.814160 systemd-logind[2788]: New session 6 of user core. Jul 7 02:15:11.824846 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 02:15:11.882065 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:12.107221 sshd[3029]: Connection closed by 139.178.68.195 port 40850 Jul 7 02:15:12.107479 sshd-session[3027]: pam_unix(sshd:session): session closed for user core Jul 7 02:15:12.110288 systemd[1]: sshd@3-147.28.151.230:22-139.178.68.195:40850.service: Deactivated successfully. Jul 7 02:15:12.112945 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 02:15:12.113518 systemd-logind[2788]: Session 6 logged out. Waiting for processes to exit. Jul 7 02:15:12.114340 systemd-logind[2788]: Removed session 6. Jul 7 02:15:12.183196 systemd[1]: Started sshd@4-147.28.151.230:22-139.178.68.195:40862.service - OpenSSH per-connection server daemon (139.178.68.195:40862). Jul 7 02:15:12.591368 sshd[3036]: Accepted publickey for core from 139.178.68.195 port 40862 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:15:12.592479 sshd-session[3036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:12.595480 systemd-logind[2788]: New session 7 of user core. Jul 7 02:15:12.618799 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 02:15:12.831343 sudo[3040]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 02:15:12.831608 sudo[3040]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:15:12.856219 sudo[3040]: pam_unix(sudo:session): session closed for user root Jul 7 02:15:12.919820 sshd[3039]: Connection closed by 139.178.68.195 port 40862 Jul 7 02:15:12.920191 sshd-session[3036]: pam_unix(sshd:session): session closed for user core Jul 7 02:15:12.923309 systemd[1]: sshd@4-147.28.151.230:22-139.178.68.195:40862.service: Deactivated successfully. Jul 7 02:15:12.924715 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 02:15:12.925287 systemd-logind[2788]: Session 7 logged out. Waiting for processes to exit. 
Jul 7 02:15:12.926285 systemd-logind[2788]: Removed session 7. Jul 7 02:15:12.999537 systemd[1]: Started sshd@5-147.28.151.230:22-139.178.68.195:40870.service - OpenSSH per-connection server daemon (139.178.68.195:40870). Jul 7 02:15:13.413238 sshd[3046]: Accepted publickey for core from 139.178.68.195 port 40870 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:15:13.414567 sshd-session[3046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:13.417865 systemd-logind[2788]: New session 8 of user core. Jul 7 02:15:13.440789 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 02:15:13.648123 sudo[3050]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 02:15:13.648374 sudo[3050]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:15:13.651430 sudo[3050]: pam_unix(sudo:session): session closed for user root Jul 7 02:15:13.655915 sudo[3049]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 02:15:13.656155 sudo[3049]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:15:13.663250 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 02:15:13.706720 augenrules[3072]: No rules Jul 7 02:15:13.707750 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 02:15:13.708808 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 02:15:13.709625 sudo[3049]: pam_unix(sudo:session): session closed for user root Jul 7 02:15:13.772938 sshd[3048]: Connection closed by 139.178.68.195 port 40870 Jul 7 02:15:13.773255 sshd-session[3046]: pam_unix(sshd:session): session closed for user core Jul 7 02:15:13.776520 systemd[1]: sshd@5-147.28.151.230:22-139.178.68.195:40870.service: Deactivated successfully. Jul 7 02:15:13.779990 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 7 02:15:13.780547 systemd-logind[2788]: Session 8 logged out. Waiting for processes to exit. Jul 7 02:15:13.781370 systemd-logind[2788]: Removed session 8. Jul 7 02:15:13.846252 systemd[1]: Started sshd@6-147.28.151.230:22-139.178.68.195:40882.service - OpenSSH per-connection server daemon (139.178.68.195:40882). Jul 7 02:15:14.247274 sshd[3082]: Accepted publickey for core from 139.178.68.195 port 40882 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:15:14.248573 sshd-session[3082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:15:14.251818 systemd-logind[2788]: New session 9 of user core. Jul 7 02:15:14.271850 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 02:15:14.477994 sudo[3085]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 02:15:14.478255 sudo[3085]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:15:14.779475 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 02:15:14.799984 (dockerd)[3110]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 02:15:15.003094 dockerd[3110]: time="2025-07-07T02:15:15.003040480Z" level=info msg="Starting up" Jul 7 02:15:15.004187 dockerd[3110]: time="2025-07-07T02:15:15.004162840Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 02:15:15.030904 dockerd[3110]: time="2025-07-07T02:15:15.030831680Z" level=info msg="Loading containers: start." Jul 7 02:15:15.042694 kernel: Initializing XFRM netlink socket Jul 7 02:15:15.212387 systemd-timesyncd[2715]: Network configuration changed, trying to establish connection. Jul 7 02:15:15.243478 systemd-networkd[2713]: docker0: Link UP Jul 7 02:15:15.244304 dockerd[3110]: time="2025-07-07T02:15:15.244274000Z" level=info msg="Loading containers: done." 
Jul 7 02:15:15.253274 dockerd[3110]: time="2025-07-07T02:15:15.253246720Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 02:15:15.253358 dockerd[3110]: time="2025-07-07T02:15:15.253308520Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 02:15:15.253411 dockerd[3110]: time="2025-07-07T02:15:15.253399120Z" level=info msg="Initializing buildkit" Jul 7 02:15:15.267807 dockerd[3110]: time="2025-07-07T02:15:15.267781600Z" level=info msg="Completed buildkit initialization" Jul 7 02:15:15.273466 dockerd[3110]: time="2025-07-07T02:15:15.273435320Z" level=info msg="Daemon has completed initialization" Jul 7 02:15:15.273526 dockerd[3110]: time="2025-07-07T02:15:15.273487360Z" level=info msg="API listen on /run/docker.sock" Jul 7 02:15:15.273618 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 02:15:15.385076 systemd-timesyncd[2715]: Contacted time server [2600:3c02:e001:1d00::123:0]:123 (2.flatcar.pool.ntp.org). Jul 7 02:15:15.385130 systemd-timesyncd[2715]: Initial clock synchronization to Mon 2025-07-07 02:15:15.229958 UTC. Jul 7 02:15:15.842854 containerd[2803]: time="2025-07-07T02:15:15.842805640Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 02:15:16.020545 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1472723306-merged.mount: Deactivated successfully. Jul 7 02:15:16.318386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518962562.mount: Deactivated successfully. 
Jul 7 02:15:17.427770 containerd[2803]: time="2025-07-07T02:15:17.427733604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:17.428059 containerd[2803]: time="2025-07-07T02:15:17.427789481Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 7 02:15:17.428649 containerd[2803]: time="2025-07-07T02:15:17.428629600Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:17.431014 containerd[2803]: time="2025-07-07T02:15:17.430990546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:17.431977 containerd[2803]: time="2025-07-07T02:15:17.431949930Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.589098619s" Jul 7 02:15:17.432004 containerd[2803]: time="2025-07-07T02:15:17.431987365Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 7 02:15:17.432459 containerd[2803]: time="2025-07-07T02:15:17.432440278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 02:15:17.951166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 7 02:15:17.952621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 02:15:18.103066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 02:15:18.106296 (kubelet)[3448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 02:15:18.141762 kubelet[3448]: E0707 02:15:18.141723 3448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 02:15:18.144746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 02:15:18.144867 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 02:15:18.145795 systemd[1]: kubelet.service: Consumed 148ms CPU time, 117.4M memory peak.
Jul 7 02:15:18.612585 containerd[2803]: time="2025-07-07T02:15:18.612549085Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228"
Jul 7 02:15:18.612843 containerd[2803]: time="2025-07-07T02:15:18.612557124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:18.613490 containerd[2803]: time="2025-07-07T02:15:18.613468607Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:18.615829 containerd[2803]: time="2025-07-07T02:15:18.615812257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:18.616719 containerd[2803]: time="2025-07-07T02:15:18.616694500Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.184226404s"
Jul 7 02:15:18.616746 containerd[2803]: time="2025-07-07T02:15:18.616726853Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 7 02:15:18.617038 containerd[2803]: time="2025-07-07T02:15:18.617018070Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 7 02:15:19.509282 containerd[2803]: time="2025-07-07T02:15:19.509248143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:19.509282 containerd[2803]: time="2025-07-07T02:15:19.509274556Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141"
Jul 7 02:15:19.510115 containerd[2803]: time="2025-07-07T02:15:19.510092880Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:19.512323 containerd[2803]: time="2025-07-07T02:15:19.512301965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:19.513216 containerd[2803]: time="2025-07-07T02:15:19.513186618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 896.139175ms"
Jul 7 02:15:19.513246 containerd[2803]: time="2025-07-07T02:15:19.513219979Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 7 02:15:19.513602 containerd[2803]: time="2025-07-07T02:15:19.513582534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 7 02:15:20.353500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170850756.mount: Deactivated successfully.
Jul 7 02:15:20.538678 containerd[2803]: time="2025-07-07T02:15:20.538639498Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406"
Jul 7 02:15:20.538678 containerd[2803]: time="2025-07-07T02:15:20.538658322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:20.539485 containerd[2803]: time="2025-07-07T02:15:20.539457032Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:20.540795 containerd[2803]: time="2025-07-07T02:15:20.540771895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:20.541275 containerd[2803]: time="2025-07-07T02:15:20.541254672Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.027642315s"
Jul 7 02:15:20.541295 containerd[2803]: time="2025-07-07T02:15:20.541281326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 7 02:15:20.541688 containerd[2803]: time="2025-07-07T02:15:20.541624110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 02:15:20.917671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835547391.mount: Deactivated successfully.
Jul 7 02:15:21.570209 containerd[2803]: time="2025-07-07T02:15:21.570145356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:21.570209 containerd[2803]: time="2025-07-07T02:15:21.570192167Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 7 02:15:21.571147 containerd[2803]: time="2025-07-07T02:15:21.571128375Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:21.573666 containerd[2803]: time="2025-07-07T02:15:21.573643107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:21.574675 containerd[2803]: time="2025-07-07T02:15:21.574647194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.032994401s"
Jul 7 02:15:21.574714 containerd[2803]: time="2025-07-07T02:15:21.574688262Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 7 02:15:21.575022 containerd[2803]: time="2025-07-07T02:15:21.574999024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 02:15:21.999140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158311386.mount: Deactivated successfully.
Jul 7 02:15:21.999390 containerd[2803]: time="2025-07-07T02:15:21.999368566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 02:15:21.999559 containerd[2803]: time="2025-07-07T02:15:21.999537590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 7 02:15:22.000005 containerd[2803]: time="2025-07-07T02:15:21.999983041Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 02:15:22.001503 containerd[2803]: time="2025-07-07T02:15:22.001486694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 02:15:22.002225 containerd[2803]: time="2025-07-07T02:15:22.002201904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 427.170079ms"
Jul 7 02:15:22.002257 containerd[2803]: time="2025-07-07T02:15:22.002229859Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 7 02:15:22.002539 containerd[2803]: time="2025-07-07T02:15:22.002525745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 7 02:15:22.372774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897899099.mount: Deactivated successfully.
Jul 7 02:15:24.603744 containerd[2803]: time="2025-07-07T02:15:24.603698445Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
Jul 7 02:15:24.604071 containerd[2803]: time="2025-07-07T02:15:24.603901127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:24.605458 containerd[2803]: time="2025-07-07T02:15:24.605397703Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:24.608140 containerd[2803]: time="2025-07-07T02:15:24.608114425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 02:15:24.609313 containerd[2803]: time="2025-07-07T02:15:24.609262415Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.606713136s"
Jul 7 02:15:24.609313 containerd[2803]: time="2025-07-07T02:15:24.609296268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 7 02:15:28.146919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 02:15:28.148430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 02:15:28.151715 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 02:15:28.151776 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 02:15:28.151977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 02:15:28.155350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 02:15:28.171589 systemd[1]: Reload requested from client PID 3676 ('systemctl') (unit session-9.scope)...
Jul 7 02:15:28.171600 systemd[1]: Reloading...
Jul 7 02:15:28.232689 zram_generator::config[3723]: No configuration found.
Jul 7 02:15:28.308843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 02:15:28.410745 systemd[1]: Reloading finished in 238 ms.
Jul 7 02:15:28.456515 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 02:15:28.456715 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 02:15:28.457037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 02:15:28.459294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 02:15:28.599776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 02:15:28.603246 (kubelet)[3785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 02:15:28.641303 kubelet[3785]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 02:15:28.641303 kubelet[3785]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 02:15:28.641303 kubelet[3785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 02:15:28.641557 kubelet[3785]: I0707 02:15:28.641364 3785 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 02:15:29.364852 kubelet[3785]: I0707 02:15:29.364823 3785 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 02:15:29.364852 kubelet[3785]: I0707 02:15:29.364849 3785 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 02:15:29.365124 kubelet[3785]: I0707 02:15:29.365111 3785 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 02:15:29.385509 kubelet[3785]: E0707 02:15:29.385484 3785 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.151.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError"
Jul 7 02:15:29.387180 kubelet[3785]: I0707 02:15:29.387153 3785 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 02:15:29.391540 kubelet[3785]: I0707 02:15:29.391512 3785 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 02:15:29.411454 kubelet[3785]: I0707 02:15:29.411426 3785 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 02:15:29.412053 kubelet[3785]: I0707 02:15:29.412016 3785 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 02:15:29.412203 kubelet[3785]: I0707 02:15:29.412054 3785 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-e89e5d604b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 02:15:29.412284 kubelet[3785]: I0707 02:15:29.412276 3785 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 02:15:29.412309 kubelet[3785]: I0707 02:15:29.412286 3785 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 02:15:29.412511 kubelet[3785]: I0707 02:15:29.412499 3785 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 02:15:29.415209 kubelet[3785]: I0707 02:15:29.415193 3785 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 02:15:29.415234 kubelet[3785]: I0707 02:15:29.415217 3785 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 02:15:29.415256 kubelet[3785]: I0707 02:15:29.415241 3785 kubelet.go:352] "Adding apiserver pod source"
Jul 7 02:15:29.415256 kubelet[3785]: I0707 02:15:29.415255 3785 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 02:15:29.418313 kubelet[3785]: I0707 02:15:29.418282 3785 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 02:15:29.418865 kubelet[3785]: W0707 02:15:29.418817 3785 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused
Jul 7 02:15:29.418944 kubelet[3785]: E0707 02:15:29.418918 3785 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError"
Jul 7 02:15:29.419245 kubelet[3785]: W0707 02:15:29.419209 3785 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-e89e5d604b&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused
Jul 7 02:15:29.419291 kubelet[3785]: E0707 02:15:29.419257 3785 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-e89e5d604b&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError"
Jul 7 02:15:29.419615 kubelet[3785]: I0707 02:15:29.419603 3785 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 02:15:29.419756 kubelet[3785]: W0707 02:15:29.419742 3785 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 02:15:29.421855 kubelet[3785]: I0707 02:15:29.421839 3785 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 02:15:29.421880 kubelet[3785]: I0707 02:15:29.421873 3785 server.go:1287] "Started kubelet"
Jul 7 02:15:29.422189 kubelet[3785]: I0707 02:15:29.422151 3785 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 02:15:29.422211 kubelet[3785]: I0707 02:15:29.422150 3785 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 02:15:29.422436 kubelet[3785]: I0707 02:15:29.422423 3785 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 02:15:29.423574 kubelet[3785]: I0707 02:15:29.423559 3785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 02:15:29.423614 kubelet[3785]: I0707 02:15:29.423562 3785 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 02:15:29.423710 kubelet[3785]: I0707 02:15:29.423679 3785 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 02:15:29.423733 kubelet[3785]: I0707 02:15:29.423699 3785 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 02:15:29.423792 kubelet[3785]: E0707 02:15:29.423735 3785 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-e89e5d604b\" not found"
Jul 7 02:15:29.423814 kubelet[3785]: I0707 02:15:29.423797 3785 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 02:15:29.424071 kubelet[3785]: W0707 02:15:29.424031 3785 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused
Jul 7 02:15:29.424139 kubelet[3785]: E0707 02:15:29.424083 3785 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError"
Jul 7 02:15:29.424139 kubelet[3785]: I0707 02:15:29.424098 3785 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 02:15:29.424299 kubelet[3785]: I0707 02:15:29.424281 3785 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 02:15:29.424388 kubelet[3785]: E0707 02:15:29.424375 3785 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 02:15:29.424880 kubelet[3785]: E0707 02:15:29.424856 3785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-e89e5d604b?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="200ms"
Jul 7 02:15:29.424968 kubelet[3785]: I0707 02:15:29.424955 3785 factory.go:221] Registration of the containerd container factory successfully
Jul 7 02:15:29.424968 kubelet[3785]: I0707 02:15:29.424969 3785 factory.go:221] Registration of the systemd container factory successfully
Jul 7 02:15:29.425605 kubelet[3785]: E0707 02:15:29.425388 3785 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.151.230:6443/api/v1/namespaces/default/events\": dial tcp 147.28.151.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-a-e89e5d604b.184fd669ac5949b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-e89e5d604b,UID:ci-4372.0.1-a-e89e5d604b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-e89e5d604b,},FirstTimestamp:2025-07-07 02:15:29.421855152 +0000 UTC m=+0.815738664,LastTimestamp:2025-07-07 02:15:29.421855152 +0000 UTC m=+0.815738664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-e89e5d604b,}"
Jul 7 02:15:29.438603 kubelet[3785]: I0707 02:15:29.438560 3785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 02:15:29.439630 kubelet[3785]: I0707 02:15:29.439617 3785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 02:15:29.439655 kubelet[3785]: I0707 02:15:29.439635 3785 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 02:15:29.439655 kubelet[3785]: I0707 02:15:29.439651 3785 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 02:15:29.439711 kubelet[3785]: I0707 02:15:29.439657 3785 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 02:15:29.439711 kubelet[3785]: I0707 02:15:29.439668 3785 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 02:15:29.439711 kubelet[3785]: I0707 02:15:29.439691 3785 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 02:15:29.439711 kubelet[3785]: I0707 02:15:29.439708 3785 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 02:15:29.439711 kubelet[3785]: E0707 02:15:29.439703 3785 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 02:15:29.440404 kubelet[3785]: I0707 02:15:29.440390 3785 policy_none.go:49] "None policy: Start"
Jul 7 02:15:29.440426 kubelet[3785]: I0707 02:15:29.440409 3785 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 02:15:29.440426 kubelet[3785]: I0707 02:15:29.440421 3785 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 02:15:29.441298 kubelet[3785]: W0707 02:15:29.441257 3785 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused
Jul 7 02:15:29.441331 kubelet[3785]: E0707 02:15:29.441313 3785 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError"
Jul 7 02:15:29.444516 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 7 02:15:29.473158 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 7 02:15:29.475787 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 7 02:15:29.487373 kubelet[3785]: I0707 02:15:29.487350 3785 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 02:15:29.487559 kubelet[3785]: I0707 02:15:29.487540 3785 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 02:15:29.487608 kubelet[3785]: I0707 02:15:29.487554 3785 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 02:15:29.487736 kubelet[3785]: I0707 02:15:29.487719 3785 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 02:15:29.488232 kubelet[3785]: E0707 02:15:29.488216 3785 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 02:15:29.488271 kubelet[3785]: E0707 02:15:29.488262 3785 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-a-e89e5d604b\" not found"
Jul 7 02:15:29.546794 systemd[1]: Created slice kubepods-burstable-podaef76188af9e54547e084dd91bb81144.slice - libcontainer container kubepods-burstable-podaef76188af9e54547e084dd91bb81144.slice.
Jul 7 02:15:29.577937 kubelet[3785]: E0707 02:15:29.577903 3785 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-e89e5d604b\" not found" node="ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.580322 systemd[1]: Created slice kubepods-burstable-pod243c4fd440822928c1ce9312873c08b5.slice - libcontainer container kubepods-burstable-pod243c4fd440822928c1ce9312873c08b5.slice.
Jul 7 02:15:29.589316 kubelet[3785]: I0707 02:15:29.589293 3785 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.589685 kubelet[3785]: E0707 02:15:29.589659 3785 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.594592 kubelet[3785]: E0707 02:15:29.594571 3785 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-e89e5d604b\" not found" node="ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.596869 systemd[1]: Created slice kubepods-burstable-pod4f1eb40d73791ea553aa1a84fefda8bb.slice - libcontainer container kubepods-burstable-pod4f1eb40d73791ea553aa1a84fefda8bb.slice.
Jul 7 02:15:29.598159 kubelet[3785]: E0707 02:15:29.598140 3785 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-e89e5d604b\" not found" node="ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624264 kubelet[3785]: I0707 02:15:29.624201 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef76188af9e54547e084dd91bb81144-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" (UID: \"aef76188af9e54547e084dd91bb81144\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624264 kubelet[3785]: I0707 02:15:29.624232 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624264 kubelet[3785]: I0707 02:15:29.624250 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624335 kubelet[3785]: I0707 02:15:29.624265 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1eb40d73791ea553aa1a84fefda8bb-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-e89e5d604b\" (UID: \"4f1eb40d73791ea553aa1a84fefda8bb\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624335 kubelet[3785]: I0707 02:15:29.624286 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef76188af9e54547e084dd91bb81144-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" (UID: \"aef76188af9e54547e084dd91bb81144\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624335 kubelet[3785]: I0707 02:15:29.624303 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef76188af9e54547e084dd91bb81144-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" (UID: \"aef76188af9e54547e084dd91bb81144\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624335 kubelet[3785]: I0707 02:15:29.624319 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624335 kubelet[3785]: I0707 02:15:29.624334 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.624434 kubelet[3785]: I0707 02:15:29.624349 3785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.625285 kubelet[3785]: E0707 02:15:29.625260 3785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-e89e5d604b?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="400ms"
Jul 7 02:15:29.791867 kubelet[3785]: I0707 02:15:29.791848 3785 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.792131 kubelet[3785]: E0707 02:15:29.792108 3785 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4372.0.1-a-e89e5d604b"
Jul 7 02:15:29.878966 containerd[2803]: time="2025-07-07T02:15:29.878914887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-e89e5d604b,Uid:aef76188af9e54547e084dd91bb81144,Namespace:kube-system,Attempt:0,}"
Jul 7 02:15:29.888932 containerd[2803]: time="2025-07-07T02:15:29.888897660Z" level=info msg="connecting to shim ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee" address="unix:///run/containerd/s/c73025b2d46f676d2247e8fc0354ebd7a90334ec2d4ac8d332944b19fb7b5281" namespace=k8s.io protocol=ttrpc version=3
Jul 7 02:15:29.895505 containerd[2803]: time="2025-07-07T02:15:29.895480637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-e89e5d604b,Uid:243c4fd440822928c1ce9312873c08b5,Namespace:kube-system,Attempt:0,}"
Jul 7 02:15:29.899029 containerd[2803]: time="2025-07-07T02:15:29.899005565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-e89e5d604b,Uid:4f1eb40d73791ea553aa1a84fefda8bb,Namespace:kube-system,Attempt:0,}"
Jul 7 02:15:29.903751 containerd[2803]: time="2025-07-07T02:15:29.903725641Z" level=info msg="connecting to shim 4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342" address="unix:///run/containerd/s/e3a3b30584f24e07e580079fdc1ed1c15278b870ca22c72f7171d8dbe3fd1cc7" namespace=k8s.io protocol=ttrpc version=3
Jul 7 02:15:29.906704 containerd[2803]: time="2025-07-07T02:15:29.906677692Z" level=info msg="connecting to shim f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a" address="unix:///run/containerd/s/fc411952c867f74fcf4c60f80a806a10d2d827d64e7ec73e6770721332753d8c" namespace=k8s.io protocol=ttrpc version=3
Jul 7 02:15:29.925904 systemd[1]: Started cri-containerd-ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee.scope - libcontainer container ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee.
Jul 7 02:15:29.932396 systemd[1]: Started cri-containerd-4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342.scope - libcontainer container 4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342.
Jul 7 02:15:29.933669 systemd[1]: Started cri-containerd-f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a.scope - libcontainer container f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a.
Jul 7 02:15:29.952984 containerd[2803]: time="2025-07-07T02:15:29.952955092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-e89e5d604b,Uid:aef76188af9e54547e084dd91bb81144,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee\""
Jul 7 02:15:29.955091 containerd[2803]: time="2025-07-07T02:15:29.955069212Z" level=info msg="CreateContainer within sandbox \"ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 02:15:29.957071 containerd[2803]: time="2025-07-07T02:15:29.957048155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-e89e5d604b,Uid:243c4fd440822928c1ce9312873c08b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342\""
Jul 7 02:15:29.959968 containerd[2803]: time="2025-07-07T02:15:29.959945992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-e89e5d604b,Uid:4f1eb40d73791ea553aa1a84fefda8bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a\""
Jul 7 02:15:29.960094 containerd[2803]: time="2025-07-07T02:15:29.960071083Z" level=info msg="CreateContainer within sandbox \"4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 02:15:29.961136 containerd[2803]: time="2025-07-07T02:15:29.961114351Z" level=info msg="Container 510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb: CDI devices from CRI Config.CDIDevices: []"
Jul 7 02:15:29.961451 containerd[2803]: time="2025-07-07T02:15:29.961433498Z" level=info msg="CreateContainer within sandbox \"f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 02:15:29.963579 containerd[2803]: time="2025-07-07T02:15:29.963557982Z" level=info msg="Container 3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208: CDI devices from CRI Config.CDIDevices: []"
Jul 7 02:15:29.964694 containerd[2803]: time="2025-07-07T02:15:29.964659610Z" level=info msg="CreateContainer within sandbox \"ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb\""
Jul 7 02:15:29.965082 containerd[2803]: time="2025-07-07T02:15:29.965059480Z" level=info msg="StartContainer for \"510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb\""
Jul 7 02:15:29.965235 containerd[2803]: time="2025-07-07T02:15:29.965216423Z" level=info msg="Container 1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482: CDI devices from CRI Config.CDIDevices: []"
Jul 7 02:15:29.966040 containerd[2803]: time="2025-07-07T02:15:29.966018994Z" level=info msg="connecting to shim 510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb" address="unix:///run/containerd/s/c73025b2d46f676d2247e8fc0354ebd7a90334ec2d4ac8d332944b19fb7b5281" protocol=ttrpc version=3
Jul 7 02:15:29.966412 containerd[2803]: time="2025-07-07T02:15:29.966386097Z" level=info msg="CreateContainer within sandbox \"4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208\""
Jul 7 02:15:29.966710 containerd[2803]: time="2025-07-07T02:15:29.966666656Z" level=info msg="StartContainer for \"3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208\""
Jul 7 02:15:29.967593 containerd[2803]: time="2025-07-07T02:15:29.967572674Z" level=info msg="connecting to shim
3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208" address="unix:///run/containerd/s/e3a3b30584f24e07e580079fdc1ed1c15278b870ca22c72f7171d8dbe3fd1cc7" protocol=ttrpc version=3 Jul 7 02:15:29.968000 containerd[2803]: time="2025-07-07T02:15:29.967981354Z" level=info msg="CreateContainer within sandbox \"f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482\"" Jul 7 02:15:29.968217 containerd[2803]: time="2025-07-07T02:15:29.968199567Z" level=info msg="StartContainer for \"1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482\"" Jul 7 02:15:29.969103 containerd[2803]: time="2025-07-07T02:15:29.969081387Z" level=info msg="connecting to shim 1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482" address="unix:///run/containerd/s/fc411952c867f74fcf4c60f80a806a10d2d827d64e7ec73e6770721332753d8c" protocol=ttrpc version=3 Jul 7 02:15:29.999856 systemd[1]: Started cri-containerd-510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb.scope - libcontainer container 510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb. Jul 7 02:15:30.002925 systemd[1]: Started cri-containerd-1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482.scope - libcontainer container 1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482. Jul 7 02:15:30.003983 systemd[1]: Started cri-containerd-3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208.scope - libcontainer container 3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208. 
Jul 7 02:15:30.025981 kubelet[3785]: E0707 02:15:30.025950 3785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-e89e5d604b?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="800ms" Jul 7 02:15:30.029035 containerd[2803]: time="2025-07-07T02:15:30.029007010Z" level=info msg="StartContainer for \"510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb\" returns successfully" Jul 7 02:15:30.029280 containerd[2803]: time="2025-07-07T02:15:30.029258337Z" level=info msg="StartContainer for \"1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482\" returns successfully" Jul 7 02:15:30.033612 containerd[2803]: time="2025-07-07T02:15:30.033593672Z" level=info msg="StartContainer for \"3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208\" returns successfully" Jul 7 02:15:30.193930 kubelet[3785]: I0707 02:15:30.193863 3785 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:30.444843 kubelet[3785]: E0707 02:15:30.444802 3785 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-e89e5d604b\" not found" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:30.445848 kubelet[3785]: E0707 02:15:30.445826 3785 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-e89e5d604b\" not found" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:30.446931 kubelet[3785]: E0707 02:15:30.446914 3785 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-e89e5d604b\" not found" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.108046 kubelet[3785]: E0707 02:15:31.107997 3785 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"ci-4372.0.1-a-e89e5d604b\" not found" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.227778 kubelet[3785]: I0707 02:15:31.227554 3785 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.324020 kubelet[3785]: I0707 02:15:31.323757 3785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.328776 kubelet[3785]: E0707 02:15:31.328741 3785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.329054 kubelet[3785]: I0707 02:15:31.328883 3785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.330340 kubelet[3785]: E0707 02:15:31.330315 3785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.330433 kubelet[3785]: I0707 02:15:31.330422 3785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.331900 kubelet[3785]: E0707 02:15:31.331879 3785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-e89e5d604b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.417774 kubelet[3785]: I0707 02:15:31.417712 3785 apiserver.go:52] "Watching apiserver" Jul 7 02:15:31.424818 kubelet[3785]: I0707 02:15:31.424797 3785 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 02:15:31.446419 kubelet[3785]: I0707 
02:15:31.446400 3785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.446506 kubelet[3785]: I0707 02:15:31.446489 3785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.447992 kubelet[3785]: E0707 02:15:31.447962 3785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-e89e5d604b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:31.448058 kubelet[3785]: E0707 02:15:31.447966 3785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:32.447704 kubelet[3785]: I0707 02:15:32.447488 3785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:32.447704 kubelet[3785]: I0707 02:15:32.447598 3785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:32.450699 kubelet[3785]: W0707 02:15:32.450667 3785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:15:32.451126 kubelet[3785]: W0707 02:15:32.450928 3785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:15:33.033926 systemd[1]: Reload requested from client PID 4204 ('systemctl') (unit session-9.scope)... Jul 7 02:15:33.033941 systemd[1]: Reloading... Jul 7 02:15:33.109699 zram_generator::config[4252]: No configuration found. 
Jul 7 02:15:33.184973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:15:33.297577 systemd[1]: Reloading finished in 263 ms. Jul 7 02:15:33.328974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:15:33.345510 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 02:15:33.346768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:15:33.346844 systemd[1]: kubelet.service: Consumed 1.260s CPU time, 146.3M memory peak. Jul 7 02:15:33.348649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:15:33.479179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:15:33.482585 (kubelet)[4310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 02:15:33.512169 kubelet[4310]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:15:33.512169 kubelet[4310]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 02:15:33.512169 kubelet[4310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 02:15:33.512412 kubelet[4310]: I0707 02:15:33.512227 4310 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 02:15:33.517193 kubelet[4310]: I0707 02:15:33.517173 4310 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 02:15:33.517226 kubelet[4310]: I0707 02:15:33.517194 4310 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 02:15:33.517425 kubelet[4310]: I0707 02:15:33.517415 4310 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 02:15:33.518521 kubelet[4310]: I0707 02:15:33.518506 4310 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 02:15:33.520532 kubelet[4310]: I0707 02:15:33.520511 4310 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 02:15:33.524438 kubelet[4310]: I0707 02:15:33.524423 4310 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 02:15:33.544526 kubelet[4310]: I0707 02:15:33.544508 4310 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 02:15:33.544719 kubelet[4310]: I0707 02:15:33.544695 4310 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 02:15:33.544872 kubelet[4310]: I0707 02:15:33.544720 4310 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-e89e5d604b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 02:15:33.544939 kubelet[4310]: I0707 02:15:33.544879 4310 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 7 02:15:33.544939 kubelet[4310]: I0707 02:15:33.544888 4310 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 02:15:33.544979 kubelet[4310]: I0707 02:15:33.544945 4310 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:15:33.545223 kubelet[4310]: I0707 02:15:33.545214 4310 kubelet.go:446] "Attempting to sync node with API server" Jul 7 02:15:33.545250 kubelet[4310]: I0707 02:15:33.545232 4310 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 02:15:33.545271 kubelet[4310]: I0707 02:15:33.545251 4310 kubelet.go:352] "Adding apiserver pod source" Jul 7 02:15:33.545271 kubelet[4310]: I0707 02:15:33.545261 4310 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 02:15:33.545790 kubelet[4310]: I0707 02:15:33.545774 4310 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 02:15:33.546222 kubelet[4310]: I0707 02:15:33.546212 4310 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 02:15:33.546614 kubelet[4310]: I0707 02:15:33.546604 4310 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 02:15:33.546639 kubelet[4310]: I0707 02:15:33.546633 4310 server.go:1287] "Started kubelet" Jul 7 02:15:33.546717 kubelet[4310]: I0707 02:15:33.546666 4310 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 02:15:33.546749 kubelet[4310]: I0707 02:15:33.546701 4310 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 02:15:33.546906 kubelet[4310]: I0707 02:15:33.546896 4310 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 02:15:33.547621 kubelet[4310]: I0707 02:15:33.547607 4310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 02:15:33.547714 kubelet[4310]: I0707 02:15:33.547660 4310 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 02:15:33.547714 kubelet[4310]: I0707 02:15:33.547689 4310 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 02:15:33.547714 kubelet[4310]: E0707 02:15:33.547692 4310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-e89e5d604b\" not found" Jul 7 02:15:33.547785 kubelet[4310]: I0707 02:15:33.547720 4310 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 02:15:33.547862 kubelet[4310]: I0707 02:15:33.547844 4310 reconciler.go:26] "Reconciler: start to sync state" Jul 7 02:15:33.548702 kubelet[4310]: I0707 02:15:33.548674 4310 factory.go:221] Registration of the systemd container factory successfully Jul 7 02:15:33.548814 kubelet[4310]: I0707 02:15:33.548797 4310 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 02:15:33.548933 kubelet[4310]: E0707 02:15:33.548915 4310 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 02:15:33.550350 kubelet[4310]: I0707 02:15:33.550332 4310 factory.go:221] Registration of the containerd container factory successfully Jul 7 02:15:33.550420 kubelet[4310]: I0707 02:15:33.550406 4310 server.go:479] "Adding debug handlers to kubelet server" Jul 7 02:15:33.555113 kubelet[4310]: I0707 02:15:33.555086 4310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 02:15:33.556115 kubelet[4310]: I0707 02:15:33.556101 4310 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 02:15:33.556141 kubelet[4310]: I0707 02:15:33.556121 4310 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 02:15:33.556141 kubelet[4310]: I0707 02:15:33.556138 4310 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 02:15:33.556181 kubelet[4310]: I0707 02:15:33.556144 4310 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 02:15:33.556483 kubelet[4310]: E0707 02:15:33.556184 4310 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 02:15:33.578181 kubelet[4310]: I0707 02:15:33.578165 4310 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 02:15:33.578211 kubelet[4310]: I0707 02:15:33.578181 4310 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 02:15:33.578211 kubelet[4310]: I0707 02:15:33.578199 4310 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:15:33.578365 kubelet[4310]: I0707 02:15:33.578350 4310 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 02:15:33.578395 kubelet[4310]: I0707 02:15:33.578362 4310 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 02:15:33.578395 kubelet[4310]: I0707 02:15:33.578380 4310 policy_none.go:49] "None policy: Start" Jul 7 02:15:33.578395 kubelet[4310]: I0707 02:15:33.578388 4310 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 02:15:33.578395 kubelet[4310]: I0707 02:15:33.578397 4310 state_mem.go:35] "Initializing new in-memory state store" Jul 7 02:15:33.578509 kubelet[4310]: I0707 02:15:33.578481 4310 state_mem.go:75] "Updated machine memory state" Jul 7 02:15:33.582987 kubelet[4310]: I0707 02:15:33.582966 4310 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 02:15:33.583166 kubelet[4310]: I0707 02:15:33.583154 
4310 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 02:15:33.583193 kubelet[4310]: I0707 02:15:33.583166 4310 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 02:15:33.583312 kubelet[4310]: I0707 02:15:33.583300 4310 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 02:15:33.583886 kubelet[4310]: E0707 02:15:33.583867 4310 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 02:15:33.657846 kubelet[4310]: I0707 02:15:33.657826 4310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.657895 kubelet[4310]: I0707 02:15:33.657849 4310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.657915 kubelet[4310]: I0707 02:15:33.657902 4310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.660580 kubelet[4310]: W0707 02:15:33.660560 4310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:15:33.660622 kubelet[4310]: W0707 02:15:33.660582 4310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:15:33.660666 kubelet[4310]: E0707 02:15:33.660628 4310 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-e89e5d604b\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.660935 kubelet[4310]: W0707 02:15:33.660924 4310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label 
is recommended: [must not contain dots] Jul 7 02:15:33.660970 kubelet[4310]: E0707 02:15:33.660959 4310 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.686541 kubelet[4310]: I0707 02:15:33.686529 4310 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.691017 kubelet[4310]: I0707 02:15:33.690993 4310 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.691073 kubelet[4310]: I0707 02:15:33.691061 4310 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748454 kubelet[4310]: I0707 02:15:33.748415 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1eb40d73791ea553aa1a84fefda8bb-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-e89e5d604b\" (UID: \"4f1eb40d73791ea553aa1a84fefda8bb\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748454 kubelet[4310]: I0707 02:15:33.748450 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef76188af9e54547e084dd91bb81144-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" (UID: \"aef76188af9e54547e084dd91bb81144\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748657 kubelet[4310]: I0707 02:15:33.748471 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef76188af9e54547e084dd91bb81144-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" (UID: \"aef76188af9e54547e084dd91bb81144\") " 
pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748657 kubelet[4310]: I0707 02:15:33.748491 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748657 kubelet[4310]: I0707 02:15:33.748508 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748657 kubelet[4310]: I0707 02:15:33.748523 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748657 kubelet[4310]: I0707 02:15:33.748581 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748786 kubelet[4310]: I0707 02:15:33.748632 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/243c4fd440822928c1ce9312873c08b5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" (UID: \"243c4fd440822928c1ce9312873c08b5\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:33.748786 kubelet[4310]: I0707 02:15:33.748658 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef76188af9e54547e084dd91bb81144-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-e89e5d604b\" (UID: \"aef76188af9e54547e084dd91bb81144\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:34.546240 kubelet[4310]: I0707 02:15:34.546215 4310 apiserver.go:52] "Watching apiserver" Jul 7 02:15:34.548304 kubelet[4310]: I0707 02:15:34.548282 4310 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 02:15:34.564640 kubelet[4310]: I0707 02:15:34.564599 4310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:34.564746 kubelet[4310]: I0707 02:15:34.564724 4310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:34.567602 kubelet[4310]: W0707 02:15:34.567579 4310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:15:34.567653 kubelet[4310]: W0707 02:15:34.567611 4310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:15:34.567653 kubelet[4310]: E0707 02:15:34.567637 4310 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-e89e5d604b\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" 
Jul 7 02:15:34.567737 kubelet[4310]: E0707 02:15:34.567640 4310 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-e89e5d604b\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" Jul 7 02:15:34.578727 kubelet[4310]: I0707 02:15:34.578677 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-e89e5d604b" podStartSLOduration=1.578664651 podStartE2EDuration="1.578664651s" podCreationTimestamp="2025-07-07 02:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:15:34.578618891 +0000 UTC m=+1.093264106" watchObservedRunningTime="2025-07-07 02:15:34.578664651 +0000 UTC m=+1.093309786" Jul 7 02:15:34.588468 kubelet[4310]: I0707 02:15:34.588432 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-a-e89e5d604b" podStartSLOduration=2.588419865 podStartE2EDuration="2.588419865s" podCreationTimestamp="2025-07-07 02:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:15:34.583900909 +0000 UTC m=+1.098546044" watchObservedRunningTime="2025-07-07 02:15:34.588419865 +0000 UTC m=+1.103065000" Jul 7 02:15:34.588548 kubelet[4310]: I0707 02:15:34.588531 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-a-e89e5d604b" podStartSLOduration=2.588528314 podStartE2EDuration="2.588528314s" podCreationTimestamp="2025-07-07 02:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:15:34.588488105 +0000 UTC m=+1.103133240" watchObservedRunningTime="2025-07-07 02:15:34.588528314 +0000 UTC m=+1.103173449" Jul 7 02:15:40.652926 
kubelet[4310]: I0707 02:15:40.652883 4310 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 02:15:40.653352 kubelet[4310]: I0707 02:15:40.653317 4310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 02:15:40.653380 containerd[2803]: time="2025-07-07T02:15:40.653160093Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 02:15:40.696173 systemd[1]: Created slice kubepods-besteffort-pod36994fa8_4f11_4a40_8d5d_0a143d71116a.slice - libcontainer container kubepods-besteffort-pod36994fa8_4f11_4a40_8d5d_0a143d71116a.slice. Jul 7 02:15:40.794542 kubelet[4310]: I0707 02:15:40.794510 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36994fa8-4f11-4a40-8d5d-0a143d71116a-kube-proxy\") pod \"kube-proxy-rwmbb\" (UID: \"36994fa8-4f11-4a40-8d5d-0a143d71116a\") " pod="kube-system/kube-proxy-rwmbb" Jul 7 02:15:40.794542 kubelet[4310]: I0707 02:15:40.794541 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36994fa8-4f11-4a40-8d5d-0a143d71116a-xtables-lock\") pod \"kube-proxy-rwmbb\" (UID: \"36994fa8-4f11-4a40-8d5d-0a143d71116a\") " pod="kube-system/kube-proxy-rwmbb" Jul 7 02:15:40.794650 kubelet[4310]: I0707 02:15:40.794556 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36994fa8-4f11-4a40-8d5d-0a143d71116a-lib-modules\") pod \"kube-proxy-rwmbb\" (UID: \"36994fa8-4f11-4a40-8d5d-0a143d71116a\") " pod="kube-system/kube-proxy-rwmbb" Jul 7 02:15:40.794650 kubelet[4310]: I0707 02:15:40.794574 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9grdm\" (UniqueName: \"kubernetes.io/projected/36994fa8-4f11-4a40-8d5d-0a143d71116a-kube-api-access-9grdm\") pod \"kube-proxy-rwmbb\" (UID: \"36994fa8-4f11-4a40-8d5d-0a143d71116a\") " pod="kube-system/kube-proxy-rwmbb" Jul 7 02:15:40.901744 kubelet[4310]: E0707 02:15:40.901720 4310 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 7 02:15:40.901744 kubelet[4310]: E0707 02:15:40.901745 4310 projected.go:194] Error preparing data for projected volume kube-api-access-9grdm for pod kube-system/kube-proxy-rwmbb: configmap "kube-root-ca.crt" not found Jul 7 02:15:40.901835 kubelet[4310]: E0707 02:15:40.901791 4310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36994fa8-4f11-4a40-8d5d-0a143d71116a-kube-api-access-9grdm podName:36994fa8-4f11-4a40-8d5d-0a143d71116a nodeName:}" failed. No retries permitted until 2025-07-07 02:15:41.401773466 +0000 UTC m=+7.916418600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9grdm" (UniqueName: "kubernetes.io/projected/36994fa8-4f11-4a40-8d5d-0a143d71116a-kube-api-access-9grdm") pod "kube-proxy-rwmbb" (UID: "36994fa8-4f11-4a40-8d5d-0a143d71116a") : configmap "kube-root-ca.crt" not found Jul 7 02:15:41.620467 containerd[2803]: time="2025-07-07T02:15:41.620426029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwmbb,Uid:36994fa8-4f11-4a40-8d5d-0a143d71116a,Namespace:kube-system,Attempt:0,}" Jul 7 02:15:41.628488 containerd[2803]: time="2025-07-07T02:15:41.628458584Z" level=info msg="connecting to shim 0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6" address="unix:///run/containerd/s/59fd6fc5334e1fa52933a93acaa5f8fecee68aea705f16a890b1724e71524e79" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:15:41.662883 systemd[1]: Started cri-containerd-0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6.scope - libcontainer container 0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6. 
Jul 7 02:15:41.679934 containerd[2803]: time="2025-07-07T02:15:41.679907553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwmbb,Uid:36994fa8-4f11-4a40-8d5d-0a143d71116a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6\"" Jul 7 02:15:41.681655 containerd[2803]: time="2025-07-07T02:15:41.681634885Z" level=info msg="CreateContainer within sandbox \"0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 02:15:41.687529 containerd[2803]: time="2025-07-07T02:15:41.687505167Z" level=info msg="Container 94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:15:41.691532 containerd[2803]: time="2025-07-07T02:15:41.691507774Z" level=info msg="CreateContainer within sandbox \"0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74\"" Jul 7 02:15:41.691949 containerd[2803]: time="2025-07-07T02:15:41.691925126Z" level=info msg="StartContainer for \"94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74\"" Jul 7 02:15:41.693212 containerd[2803]: time="2025-07-07T02:15:41.693189417Z" level=info msg="connecting to shim 94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74" address="unix:///run/containerd/s/59fd6fc5334e1fa52933a93acaa5f8fecee68aea705f16a890b1724e71524e79" protocol=ttrpc version=3 Jul 7 02:15:41.716864 systemd[1]: Started cri-containerd-94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74.scope - libcontainer container 94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74. 
Jul 7 02:15:41.756028 systemd[1]: Created slice kubepods-besteffort-pod9582b335_70b2_40bf_b05b_09207e7f5129.slice - libcontainer container kubepods-besteffort-pod9582b335_70b2_40bf_b05b_09207e7f5129.slice. Jul 7 02:15:41.758649 containerd[2803]: time="2025-07-07T02:15:41.758621047Z" level=info msg="StartContainer for \"94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74\" returns successfully" Jul 7 02:15:41.801124 kubelet[4310]: I0707 02:15:41.801079 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9582b335-70b2-40bf-b05b-09207e7f5129-var-lib-calico\") pod \"tigera-operator-747864d56d-8tm85\" (UID: \"9582b335-70b2-40bf-b05b-09207e7f5129\") " pod="tigera-operator/tigera-operator-747864d56d-8tm85" Jul 7 02:15:41.801124 kubelet[4310]: I0707 02:15:41.801127 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbcrx\" (UniqueName: \"kubernetes.io/projected/9582b335-70b2-40bf-b05b-09207e7f5129-kube-api-access-sbcrx\") pod \"tigera-operator-747864d56d-8tm85\" (UID: \"9582b335-70b2-40bf-b05b-09207e7f5129\") " pod="tigera-operator/tigera-operator-747864d56d-8tm85" Jul 7 02:15:42.059041 containerd[2803]: time="2025-07-07T02:15:42.058928043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8tm85,Uid:9582b335-70b2-40bf-b05b-09207e7f5129,Namespace:tigera-operator,Attempt:0,}" Jul 7 02:15:42.068333 containerd[2803]: time="2025-07-07T02:15:42.068303120Z" level=info msg="connecting to shim 56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3" address="unix:///run/containerd/s/2d3e947a75677f23830d012d601f78a26f9c16ee0f8ee324eee8ae8eef133e06" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:15:42.091793 systemd[1]: Started cri-containerd-56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3.scope - libcontainer container 
56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3. Jul 7 02:15:42.118106 containerd[2803]: time="2025-07-07T02:15:42.118074965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8tm85,Uid:9582b335-70b2-40bf-b05b-09207e7f5129,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3\"" Jul 7 02:15:42.119268 containerd[2803]: time="2025-07-07T02:15:42.119246540Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 02:15:42.582086 kubelet[4310]: I0707 02:15:42.582009 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rwmbb" podStartSLOduration=2.58199004 podStartE2EDuration="2.58199004s" podCreationTimestamp="2025-07-07 02:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:15:42.581916444 +0000 UTC m=+9.096561579" watchObservedRunningTime="2025-07-07 02:15:42.58199004 +0000 UTC m=+9.096635215" Jul 7 02:15:43.191106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044663051.mount: Deactivated successfully. 
Jul 7 02:15:43.530221 containerd[2803]: time="2025-07-07T02:15:43.530122747Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:43.530494 containerd[2803]: time="2025-07-07T02:15:43.530119148Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 02:15:43.530871 containerd[2803]: time="2025-07-07T02:15:43.530854601Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:43.532480 containerd[2803]: time="2025-07-07T02:15:43.532456478Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:43.533155 containerd[2803]: time="2025-07-07T02:15:43.533129363Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.413850002s" Jul 7 02:15:43.533184 containerd[2803]: time="2025-07-07T02:15:43.533160307Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 02:15:43.534642 containerd[2803]: time="2025-07-07T02:15:43.534623497Z" level=info msg="CreateContainer within sandbox \"56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 02:15:43.538407 containerd[2803]: time="2025-07-07T02:15:43.538378839Z" level=info msg="Container 
906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:15:43.541013 containerd[2803]: time="2025-07-07T02:15:43.540986106Z" level=info msg="CreateContainer within sandbox \"56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0\"" Jul 7 02:15:43.541347 containerd[2803]: time="2025-07-07T02:15:43.541328126Z" level=info msg="StartContainer for \"906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0\"" Jul 7 02:15:43.542039 containerd[2803]: time="2025-07-07T02:15:43.542019962Z" level=info msg="connecting to shim 906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0" address="unix:///run/containerd/s/2d3e947a75677f23830d012d601f78a26f9c16ee0f8ee324eee8ae8eef133e06" protocol=ttrpc version=3 Jul 7 02:15:43.571799 systemd[1]: Started cri-containerd-906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0.scope - libcontainer container 906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0. 
Jul 7 02:15:43.592570 containerd[2803]: time="2025-07-07T02:15:43.592537480Z" level=info msg="StartContainer for \"906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0\" returns successfully" Jul 7 02:15:47.656114 kubelet[4310]: I0707 02:15:47.656054 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8tm85" podStartSLOduration=5.241237438 podStartE2EDuration="6.656039228s" podCreationTimestamp="2025-07-07 02:15:41 +0000 UTC" firstStartedPulling="2025-07-07 02:15:42.118860972 +0000 UTC m=+8.633506107" lastFinishedPulling="2025-07-07 02:15:43.533662762 +0000 UTC m=+10.048307897" observedRunningTime="2025-07-07 02:15:44.584693414 +0000 UTC m=+11.099338549" watchObservedRunningTime="2025-07-07 02:15:47.656039228 +0000 UTC m=+14.170684363" Jul 7 02:15:48.245906 sudo[3085]: pam_unix(sudo:session): session closed for user root Jul 7 02:15:48.307513 sshd[3084]: Connection closed by 139.178.68.195 port 40882 Jul 7 02:15:48.307917 sshd-session[3082]: pam_unix(sshd:session): session closed for user core Jul 7 02:15:48.311028 systemd[1]: sshd@6-147.28.151.230:22-139.178.68.195:40882.service: Deactivated successfully. Jul 7 02:15:48.314118 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 02:15:48.314309 systemd[1]: session-9.scope: Consumed 6.073s CPU time, 246.6M memory peak. Jul 7 02:15:48.315404 systemd-logind[2788]: Session 9 logged out. Waiting for processes to exit. Jul 7 02:15:48.316222 systemd-logind[2788]: Removed session 9. Jul 7 02:15:49.133628 update_engine[2798]: I20250707 02:15:49.133534 2798 update_attempter.cc:509] Updating boot flags... Jul 7 02:15:54.452530 systemd[1]: Created slice kubepods-besteffort-podc976dad1_bdd4_4de4_bc9e_cc3b02cd1796.slice - libcontainer container kubepods-besteffort-podc976dad1_bdd4_4de4_bc9e_cc3b02cd1796.slice. 
Jul 7 02:15:54.483037 kubelet[4310]: I0707 02:15:54.482997 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c976dad1-bdd4-4de4-bc9e-cc3b02cd1796-typha-certs\") pod \"calico-typha-7ffbbfbf4d-jpjs4\" (UID: \"c976dad1-bdd4-4de4-bc9e-cc3b02cd1796\") " pod="calico-system/calico-typha-7ffbbfbf4d-jpjs4" Jul 7 02:15:54.483037 kubelet[4310]: I0707 02:15:54.483035 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c976dad1-bdd4-4de4-bc9e-cc3b02cd1796-tigera-ca-bundle\") pod \"calico-typha-7ffbbfbf4d-jpjs4\" (UID: \"c976dad1-bdd4-4de4-bc9e-cc3b02cd1796\") " pod="calico-system/calico-typha-7ffbbfbf4d-jpjs4" Jul 7 02:15:54.483395 kubelet[4310]: I0707 02:15:54.483052 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrb2\" (UniqueName: \"kubernetes.io/projected/c976dad1-bdd4-4de4-bc9e-cc3b02cd1796-kube-api-access-nmrb2\") pod \"calico-typha-7ffbbfbf4d-jpjs4\" (UID: \"c976dad1-bdd4-4de4-bc9e-cc3b02cd1796\") " pod="calico-system/calico-typha-7ffbbfbf4d-jpjs4" Jul 7 02:15:54.754730 containerd[2803]: time="2025-07-07T02:15:54.754638545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ffbbfbf4d-jpjs4,Uid:c976dad1-bdd4-4de4-bc9e-cc3b02cd1796,Namespace:calico-system,Attempt:0,}" Jul 7 02:15:54.763720 containerd[2803]: time="2025-07-07T02:15:54.763273474Z" level=info msg="connecting to shim 5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c" address="unix:///run/containerd/s/b4c4f42c4fa4c1940c99f8b2dda13e3a8f4ede3984ebde41f67fb2da456688f3" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:15:54.764214 systemd[1]: Created slice kubepods-besteffort-pod97ed165b_01f0_471f_8916_b7ae9952fde8.slice - libcontainer container 
kubepods-besteffort-pod97ed165b_01f0_471f_8916_b7ae9952fde8.slice. Jul 7 02:15:54.785143 kubelet[4310]: I0707 02:15:54.784917 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-cni-bin-dir\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785143 kubelet[4310]: I0707 02:15:54.784978 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-var-run-calico\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785143 kubelet[4310]: I0707 02:15:54.785006 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-xtables-lock\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785143 kubelet[4310]: I0707 02:15:54.785028 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-cni-net-dir\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785143 kubelet[4310]: I0707 02:15:54.785042 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-cni-log-dir\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785335 
kubelet[4310]: I0707 02:15:54.785057 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-lib-modules\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785335 kubelet[4310]: I0707 02:15:54.785075 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-var-lib-calico\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785335 kubelet[4310]: I0707 02:15:54.785111 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-flexvol-driver-host\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785335 kubelet[4310]: I0707 02:15:54.785127 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/97ed165b-01f0-471f-8916-b7ae9952fde8-policysync\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785335 kubelet[4310]: I0707 02:15:54.785175 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97ed165b-01f0-471f-8916-b7ae9952fde8-tigera-ca-bundle\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785431 kubelet[4310]: I0707 02:15:54.785216 4310 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/97ed165b-01f0-471f-8916-b7ae9952fde8-node-certs\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.785431 kubelet[4310]: I0707 02:15:54.785232 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzslf\" (UniqueName: \"kubernetes.io/projected/97ed165b-01f0-471f-8916-b7ae9952fde8-kube-api-access-fzslf\") pod \"calico-node-cs4lm\" (UID: \"97ed165b-01f0-471f-8916-b7ae9952fde8\") " pod="calico-system/calico-node-cs4lm" Jul 7 02:15:54.788798 systemd[1]: Started cri-containerd-5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c.scope - libcontainer container 5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c. Jul 7 02:15:54.814669 containerd[2803]: time="2025-07-07T02:15:54.814637457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ffbbfbf4d-jpjs4,Uid:c976dad1-bdd4-4de4-bc9e-cc3b02cd1796,Namespace:calico-system,Attempt:0,} returns sandbox id \"5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c\"" Jul 7 02:15:54.815660 containerd[2803]: time="2025-07-07T02:15:54.815634557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 02:15:54.887118 kubelet[4310]: E0707 02:15:54.887096 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:54.887118 kubelet[4310]: W0707 02:15:54.887115 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:54.887215 kubelet[4310]: E0707 02:15:54.887135 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:54.888745 kubelet[4310]: E0707 02:15:54.888728 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:54.888773 kubelet[4310]: W0707 02:15:54.888746 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:54.888773 kubelet[4310]: E0707 02:15:54.888762 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:54.894784 kubelet[4310]: E0707 02:15:54.894769 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:54.894816 kubelet[4310]: W0707 02:15:54.894784 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:54.894816 kubelet[4310]: E0707 02:15:54.894797 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:54.989272 kubelet[4310]: E0707 02:15:54.989236 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4gts" podUID="620039ac-83fb-48da-b837-6597b522186c" Jul 7 02:15:55.067221 containerd[2803]: time="2025-07-07T02:15:55.067149234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cs4lm,Uid:97ed165b-01f0-471f-8916-b7ae9952fde8,Namespace:calico-system,Attempt:0,}" Jul 7 02:15:55.068518 kubelet[4310]: E0707 02:15:55.068501 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.068518 kubelet[4310]: W0707 02:15:55.068517 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.068596 kubelet[4310]: E0707 02:15:55.068532 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.068757 kubelet[4310]: E0707 02:15:55.068745 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.068787 kubelet[4310]: W0707 02:15:55.068753 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.068809 kubelet[4310]: E0707 02:15:55.068790 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.068982 kubelet[4310]: E0707 02:15:55.068972 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.068982 kubelet[4310]: W0707 02:15:55.068979 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.069030 kubelet[4310]: E0707 02:15:55.068987 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.069160 kubelet[4310]: E0707 02:15:55.069150 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.069160 kubelet[4310]: W0707 02:15:55.069156 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.069201 kubelet[4310]: E0707 02:15:55.069164 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.069365 kubelet[4310]: E0707 02:15:55.069355 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.069365 kubelet[4310]: W0707 02:15:55.069362 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.069458 kubelet[4310]: E0707 02:15:55.069369 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.069537 kubelet[4310]: E0707 02:15:55.069527 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.069537 kubelet[4310]: W0707 02:15:55.069533 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.069577 kubelet[4310]: E0707 02:15:55.069540 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.069705 kubelet[4310]: E0707 02:15:55.069698 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.069732 kubelet[4310]: W0707 02:15:55.069705 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.069732 kubelet[4310]: E0707 02:15:55.069712 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.069863 kubelet[4310]: E0707 02:15:55.069856 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.069885 kubelet[4310]: W0707 02:15:55.069865 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.069885 kubelet[4310]: E0707 02:15:55.069873 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.070043 kubelet[4310]: E0707 02:15:55.070035 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.070066 kubelet[4310]: W0707 02:15:55.070043 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.070066 kubelet[4310]: E0707 02:15:55.070050 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.070205 kubelet[4310]: E0707 02:15:55.070198 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.070227 kubelet[4310]: W0707 02:15:55.070205 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.070227 kubelet[4310]: E0707 02:15:55.070214 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.070356 kubelet[4310]: E0707 02:15:55.070348 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.070380 kubelet[4310]: W0707 02:15:55.070359 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.070380 kubelet[4310]: E0707 02:15:55.070367 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.070525 kubelet[4310]: E0707 02:15:55.070516 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.070546 kubelet[4310]: W0707 02:15:55.070527 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.070546 kubelet[4310]: E0707 02:15:55.070536 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.070762 kubelet[4310]: E0707 02:15:55.070754 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.070787 kubelet[4310]: W0707 02:15:55.070762 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.070787 kubelet[4310]: E0707 02:15:55.070771 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.070920 kubelet[4310]: E0707 02:15:55.070912 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.070943 kubelet[4310]: W0707 02:15:55.070920 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.070943 kubelet[4310]: E0707 02:15:55.070931 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.071111 kubelet[4310]: E0707 02:15:55.071102 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.071133 kubelet[4310]: W0707 02:15:55.071111 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.071133 kubelet[4310]: E0707 02:15:55.071121 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.071313 kubelet[4310]: E0707 02:15:55.071304 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.071333 kubelet[4310]: W0707 02:15:55.071313 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.071333 kubelet[4310]: E0707 02:15:55.071322 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.071496 kubelet[4310]: E0707 02:15:55.071487 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.071518 kubelet[4310]: W0707 02:15:55.071497 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.071518 kubelet[4310]: E0707 02:15:55.071506 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.071685 kubelet[4310]: E0707 02:15:55.071675 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.071704 kubelet[4310]: W0707 02:15:55.071686 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.071704 kubelet[4310]: E0707 02:15:55.071695 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.071824 kubelet[4310]: E0707 02:15:55.071816 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.071846 kubelet[4310]: W0707 02:15:55.071824 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.071846 kubelet[4310]: E0707 02:15:55.071831 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.072003 kubelet[4310]: E0707 02:15:55.071996 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.072024 kubelet[4310]: W0707 02:15:55.072002 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.072024 kubelet[4310]: E0707 02:15:55.072010 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.074722 containerd[2803]: time="2025-07-07T02:15:55.074698913Z" level=info msg="connecting to shim 645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e" address="unix:///run/containerd/s/5f7efaa5f805765697b7569a09b49ac0039e5a3f1f5ddda08cc56c5eea8806f7" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:15:55.087358 kubelet[4310]: E0707 02:15:55.087338 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.087358 kubelet[4310]: W0707 02:15:55.087355 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.087416 kubelet[4310]: E0707 02:15:55.087368 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.087416 kubelet[4310]: I0707 02:15:55.087388 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/620039ac-83fb-48da-b837-6597b522186c-socket-dir\") pod \"csi-node-driver-v4gts\" (UID: \"620039ac-83fb-48da-b837-6597b522186c\") " pod="calico-system/csi-node-driver-v4gts" Jul 7 02:15:55.087632 kubelet[4310]: E0707 02:15:55.087613 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.087654 kubelet[4310]: W0707 02:15:55.087630 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.087654 kubelet[4310]: E0707 02:15:55.087648 4310 plugins.go:695] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.087871 kubelet[4310]: E0707 02:15:55.087859 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.087871 kubelet[4310]: W0707 02:15:55.087868 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.087916 kubelet[4310]: E0707 02:15:55.087879 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.088093 kubelet[4310]: E0707 02:15:55.088082 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.088093 kubelet[4310]: W0707 02:15:55.088090 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.088138 kubelet[4310]: E0707 02:15:55.088098 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.088138 kubelet[4310]: I0707 02:15:55.088119 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/620039ac-83fb-48da-b837-6597b522186c-registration-dir\") pod \"csi-node-driver-v4gts\" (UID: \"620039ac-83fb-48da-b837-6597b522186c\") " pod="calico-system/csi-node-driver-v4gts" Jul 7 02:15:55.088313 kubelet[4310]: E0707 02:15:55.088303 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.088337 kubelet[4310]: W0707 02:15:55.088313 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.088337 kubelet[4310]: E0707 02:15:55.088324 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.088374 kubelet[4310]: I0707 02:15:55.088337 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72m22\" (UniqueName: \"kubernetes.io/projected/620039ac-83fb-48da-b837-6597b522186c-kube-api-access-72m22\") pod \"csi-node-driver-v4gts\" (UID: \"620039ac-83fb-48da-b837-6597b522186c\") " pod="calico-system/csi-node-driver-v4gts" Jul 7 02:15:55.088526 kubelet[4310]: E0707 02:15:55.088517 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.088548 kubelet[4310]: W0707 02:15:55.088526 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.088548 kubelet[4310]: E0707 02:15:55.088537 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.088587 kubelet[4310]: I0707 02:15:55.088550 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/620039ac-83fb-48da-b837-6597b522186c-kubelet-dir\") pod \"csi-node-driver-v4gts\" (UID: \"620039ac-83fb-48da-b837-6597b522186c\") " pod="calico-system/csi-node-driver-v4gts" Jul 7 02:15:55.088742 kubelet[4310]: E0707 02:15:55.088732 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.088768 kubelet[4310]: W0707 02:15:55.088742 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.088768 kubelet[4310]: E0707 02:15:55.088753 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.088806 kubelet[4310]: I0707 02:15:55.088766 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/620039ac-83fb-48da-b837-6597b522186c-varrun\") pod \"csi-node-driver-v4gts\" (UID: \"620039ac-83fb-48da-b837-6597b522186c\") " pod="calico-system/csi-node-driver-v4gts" Jul 7 02:15:55.088977 kubelet[4310]: E0707 02:15:55.088967 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.088998 kubelet[4310]: W0707 02:15:55.088977 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.088998 kubelet[4310]: E0707 02:15:55.088990 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.089167 kubelet[4310]: E0707 02:15:55.089159 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.089191 kubelet[4310]: W0707 02:15:55.089167 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.089191 kubelet[4310]: E0707 02:15:55.089178 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.089361 kubelet[4310]: E0707 02:15:55.089353 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.089381 kubelet[4310]: W0707 02:15:55.089361 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.089381 kubelet[4310]: E0707 02:15:55.089372 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.089546 kubelet[4310]: E0707 02:15:55.089538 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.089566 kubelet[4310]: W0707 02:15:55.089545 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.089566 kubelet[4310]: E0707 02:15:55.089555 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.089742 kubelet[4310]: E0707 02:15:55.089733 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.089765 kubelet[4310]: W0707 02:15:55.089742 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.089765 kubelet[4310]: E0707 02:15:55.089753 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.089963 kubelet[4310]: E0707 02:15:55.089955 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.089983 kubelet[4310]: W0707 02:15:55.089963 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.089983 kubelet[4310]: E0707 02:15:55.089980 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.090142 kubelet[4310]: E0707 02:15:55.090134 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.090161 kubelet[4310]: W0707 02:15:55.090142 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.090161 kubelet[4310]: E0707 02:15:55.090149 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.090321 kubelet[4310]: E0707 02:15:55.090314 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.090342 kubelet[4310]: W0707 02:15:55.090321 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.090342 kubelet[4310]: E0707 02:15:55.090329 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.104856 systemd[1]: Started cri-containerd-645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e.scope - libcontainer container 645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e. 
Jul 7 02:15:55.124903 containerd[2803]: time="2025-07-07T02:15:55.124865403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cs4lm,Uid:97ed165b-01f0-471f-8916-b7ae9952fde8,Namespace:calico-system,Attempt:0,} returns sandbox id \"645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e\"" Jul 7 02:15:55.190014 kubelet[4310]: E0707 02:15:55.189993 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.190014 kubelet[4310]: W0707 02:15:55.190012 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.190071 kubelet[4310]: E0707 02:15:55.190029 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.190245 kubelet[4310]: E0707 02:15:55.190234 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.190245 kubelet[4310]: W0707 02:15:55.190242 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.190291 kubelet[4310]: E0707 02:15:55.190254 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.190452 kubelet[4310]: E0707 02:15:55.190444 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.190475 kubelet[4310]: W0707 02:15:55.190452 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.190475 kubelet[4310]: E0707 02:15:55.190463 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.190695 kubelet[4310]: E0707 02:15:55.190675 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.190721 kubelet[4310]: W0707 02:15:55.190696 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.190721 kubelet[4310]: E0707 02:15:55.190714 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.190929 kubelet[4310]: E0707 02:15:55.190921 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.190955 kubelet[4310]: W0707 02:15:55.190928 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.190955 kubelet[4310]: E0707 02:15:55.190939 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.191129 kubelet[4310]: E0707 02:15:55.191121 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.191129 kubelet[4310]: W0707 02:15:55.191129 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.191171 kubelet[4310]: E0707 02:15:55.191138 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.191364 kubelet[4310]: E0707 02:15:55.191356 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.191384 kubelet[4310]: W0707 02:15:55.191364 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.191384 kubelet[4310]: E0707 02:15:55.191374 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.191520 kubelet[4310]: E0707 02:15:55.191512 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.191541 kubelet[4310]: W0707 02:15:55.191520 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.191541 kubelet[4310]: E0707 02:15:55.191530 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.191671 kubelet[4310]: E0707 02:15:55.191663 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.191704 kubelet[4310]: W0707 02:15:55.191671 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.191729 kubelet[4310]: E0707 02:15:55.191696 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.191842 kubelet[4310]: E0707 02:15:55.191834 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.191867 kubelet[4310]: W0707 02:15:55.191841 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.191867 kubelet[4310]: E0707 02:15:55.191856 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.192055 kubelet[4310]: E0707 02:15:55.192047 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.192075 kubelet[4310]: W0707 02:15:55.192054 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.192110 kubelet[4310]: E0707 02:15:55.192082 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.192224 kubelet[4310]: E0707 02:15:55.192216 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.192251 kubelet[4310]: W0707 02:15:55.192224 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.192251 kubelet[4310]: E0707 02:15:55.192235 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.192449 kubelet[4310]: E0707 02:15:55.192437 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.192470 kubelet[4310]: W0707 02:15:55.192450 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.192470 kubelet[4310]: E0707 02:15:55.192464 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.192636 kubelet[4310]: E0707 02:15:55.192626 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.192662 kubelet[4310]: W0707 02:15:55.192639 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.192662 kubelet[4310]: E0707 02:15:55.192652 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.194064 kubelet[4310]: E0707 02:15:55.194037 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.195205 kubelet[4310]: W0707 02:15:55.194106 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.195205 kubelet[4310]: E0707 02:15:55.195192 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.195375 kubelet[4310]: E0707 02:15:55.195364 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.195375 kubelet[4310]: W0707 02:15:55.195374 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.195426 kubelet[4310]: E0707 02:15:55.195404 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.195516 kubelet[4310]: E0707 02:15:55.195508 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.195537 kubelet[4310]: W0707 02:15:55.195516 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.195556 kubelet[4310]: E0707 02:15:55.195536 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.195668 kubelet[4310]: E0707 02:15:55.195660 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.195704 kubelet[4310]: W0707 02:15:55.195667 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.195704 kubelet[4310]: E0707 02:15:55.195696 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.195796 kubelet[4310]: E0707 02:15:55.195788 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.195818 kubelet[4310]: W0707 02:15:55.195796 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.195847 kubelet[4310]: E0707 02:15:55.195822 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:15:55.195942 kubelet[4310]: E0707 02:15:55.195934 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.195969 kubelet[4310]: W0707 02:15:55.195942 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.195969 kubelet[4310]: E0707 02:15:55.195954 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.498763 containerd[2803]: time="2025-07-07T02:15:55.498729549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:55.498860 containerd[2803]: time="2025-07-07T02:15:55.498732149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 02:15:55.499387 containerd[2803]: time="2025-07-07T02:15:55.499367448Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:55.500754 containerd[2803]: time="2025-07-07T02:15:55.500734158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:55.501374 containerd[2803]: time="2025-07-07T02:15:55.501346979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 685.679506ms" Jul 7 02:15:55.501399 containerd[2803]: time="2025-07-07T02:15:55.501376377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 02:15:55.502427 containerd[2803]: time="2025-07-07T02:15:55.502402559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 02:15:55.506702 containerd[2803]: time="2025-07-07T02:15:55.506667991Z" level=info msg="CreateContainer within sandbox 
\"5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 02:15:55.510436 containerd[2803]: time="2025-07-07T02:15:55.510404795Z" level=info msg="Container 4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:15:55.513676 containerd[2803]: time="2025-07-07T02:15:55.513650925Z" level=info msg="CreateContainer within sandbox \"5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a\"" Jul 7 02:15:55.513958 containerd[2803]: time="2025-07-07T02:15:55.513938737Z" level=info msg="StartContainer for \"4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a\"" Jul 7 02:15:55.514895 containerd[2803]: time="2025-07-07T02:15:55.514873248Z" level=info msg="connecting to shim 4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a" address="unix:///run/containerd/s/b4c4f42c4fa4c1940c99f8b2dda13e3a8f4ede3984ebde41f67fb2da456688f3" protocol=ttrpc version=3 Jul 7 02:15:55.543873 systemd[1]: Started cri-containerd-4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a.scope - libcontainer container 4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a. 
Jul 7 02:15:55.572748 containerd[2803]: time="2025-07-07T02:15:55.572717286Z" level=info msg="StartContainer for \"4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a\" returns successfully" Jul 7 02:15:55.604526 kubelet[4310]: I0707 02:15:55.604483 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ffbbfbf4d-jpjs4" podStartSLOduration=0.917920832 podStartE2EDuration="1.604468574s" podCreationTimestamp="2025-07-07 02:15:54 +0000 UTC" firstStartedPulling="2025-07-07 02:15:54.815445176 +0000 UTC m=+21.330090311" lastFinishedPulling="2025-07-07 02:15:55.501992918 +0000 UTC m=+22.016638053" observedRunningTime="2025-07-07 02:15:55.604457295 +0000 UTC m=+22.119102430" watchObservedRunningTime="2025-07-07 02:15:55.604468574 +0000 UTC m=+22.119113709" Jul 7 02:15:55.674381 kubelet[4310]: E0707 02:15:55.674342 4310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:15:55.674381 kubelet[4310]: W0707 02:15:55.674363 4310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:15:55.674381 kubelet[4310]: E0707 02:15:55.674385 4310 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:15:55.842654 containerd[2803]: time="2025-07-07T02:15:55.842617877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:55.842988 containerd[2803]: time="2025-07-07T02:15:55.842697350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 02:15:55.843305 containerd[2803]: time="2025-07-07T02:15:55.843287693Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:55.844792 containerd[2803]: time="2025-07-07T02:15:55.844763632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:55.845393 containerd[2803]: time="2025-07-07T02:15:55.845368535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 342.934939ms" Jul 7 02:15:55.845423 containerd[2803]: time="2025-07-07T02:15:55.845399492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 02:15:55.846821 containerd[2803]: time="2025-07-07T02:15:55.846801358Z" level=info msg="CreateContainer within sandbox \"645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 02:15:55.850806 containerd[2803]: time="2025-07-07T02:15:55.850780818Z" level=info msg="Container 4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:15:55.854640 containerd[2803]: time="2025-07-07T02:15:55.854617851Z" level=info msg="CreateContainer within sandbox \"645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af\"" Jul 7 02:15:55.854977 containerd[2803]: time="2025-07-07T02:15:55.854958779Z" level=info msg="StartContainer for \"4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af\"" Jul 7 02:15:55.856336 containerd[2803]: time="2025-07-07T02:15:55.856306250Z" level=info msg="connecting to shim 4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af" address="unix:///run/containerd/s/5f7efaa5f805765697b7569a09b49ac0039e5a3f1f5ddda08cc56c5eea8806f7" protocol=ttrpc version=3 Jul 7 02:15:55.878796 systemd[1]: Started cri-containerd-4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af.scope - libcontainer container 4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af. Jul 7 02:15:55.905226 containerd[2803]: time="2025-07-07T02:15:55.905200542Z" level=info msg="StartContainer for \"4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af\" returns successfully" Jul 7 02:15:55.915282 systemd[1]: cri-containerd-4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af.scope: Deactivated successfully. 
Jul 7 02:15:55.916943 containerd[2803]: time="2025-07-07T02:15:55.916918983Z" level=info msg="received exit event container_id:\"4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af\" id:\"4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af\" pid:5357 exited_at:{seconds:1751854555 nanos:916657928}" Jul 7 02:15:55.917112 containerd[2803]: time="2025-07-07T02:15:55.917096006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af\" id:\"4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af\" pid:5357 exited_at:{seconds:1751854555 nanos:916657928}" Jul 7 02:15:55.932027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af-rootfs.mount: Deactivated successfully. Jul 7 02:15:56.556611 kubelet[4310]: E0707 02:15:56.556579 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4gts" podUID="620039ac-83fb-48da-b837-6597b522186c" Jul 7 02:15:56.601299 containerd[2803]: time="2025-07-07T02:15:56.601264837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 02:15:57.914135 containerd[2803]: time="2025-07-07T02:15:57.914096436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:57.914447 containerd[2803]: time="2025-07-07T02:15:57.914178029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 02:15:57.914726 containerd[2803]: time="2025-07-07T02:15:57.914707064Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:57.916233 containerd[2803]: time="2025-07-07T02:15:57.916213575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:15:57.916874 containerd[2803]: time="2025-07-07T02:15:57.916855320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.315554085s" Jul 7 02:15:57.916901 containerd[2803]: time="2025-07-07T02:15:57.916881118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 02:15:57.919251 containerd[2803]: time="2025-07-07T02:15:57.919226636Z" level=info msg="CreateContainer within sandbox \"645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 02:15:57.923634 containerd[2803]: time="2025-07-07T02:15:57.923612340Z" level=info msg="Container eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:15:57.928193 containerd[2803]: time="2025-07-07T02:15:57.928171029Z" level=info msg="CreateContainer within sandbox \"645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4\"" Jul 7 02:15:57.928536 containerd[2803]: time="2025-07-07T02:15:57.928515999Z" level=info msg="StartContainer for 
\"eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4\"" Jul 7 02:15:57.929859 containerd[2803]: time="2025-07-07T02:15:57.929837006Z" level=info msg="connecting to shim eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4" address="unix:///run/containerd/s/5f7efaa5f805765697b7569a09b49ac0039e5a3f1f5ddda08cc56c5eea8806f7" protocol=ttrpc version=3 Jul 7 02:15:57.959802 systemd[1]: Started cri-containerd-eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4.scope - libcontainer container eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4. Jul 7 02:15:57.987428 containerd[2803]: time="2025-07-07T02:15:57.987401786Z" level=info msg="StartContainer for \"eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4\" returns successfully" Jul 7 02:15:58.375522 containerd[2803]: time="2025-07-07T02:15:58.375486928Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 02:15:58.377181 systemd[1]: cri-containerd-eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4.scope: Deactivated successfully. Jul 7 02:15:58.377556 systemd[1]: cri-containerd-eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4.scope: Consumed 919ms CPU time, 200.2M memory peak, 165.8M written to disk. 
Jul 7 02:15:58.377986 containerd[2803]: time="2025-07-07T02:15:58.377961807Z" level=info msg="received exit event container_id:\"eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4\" id:\"eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4\" pid:5421 exited_at:{seconds:1751854558 nanos:377833657}" Jul 7 02:15:58.378491 containerd[2803]: time="2025-07-07T02:15:58.378456406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4\" id:\"eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4\" pid:5421 exited_at:{seconds:1751854558 nanos:377833657}" Jul 7 02:15:58.395048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4-rootfs.mount: Deactivated successfully. Jul 7 02:15:58.476631 kubelet[4310]: I0707 02:15:58.476610 4310 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 02:15:58.496704 kubelet[4310]: W0707 02:15:58.496528 4310 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4372.0.1-a-e89e5d604b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372.0.1-a-e89e5d604b' and this object Jul 7 02:15:58.496704 kubelet[4310]: E0707 02:15:58.496570 4310 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4372.0.1-a-e89e5d604b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372.0.1-a-e89e5d604b' and this object" logger="UnhandledError" Jul 7 02:15:58.496704 kubelet[4310]: I0707 02:15:58.495859 4310 status_manager.go:890] "Failed to get status for pod" 
podUID="67f18952-866b-40bb-a879-3ff8e070364a" pod="kube-system/coredns-668d6bf9bc-7hwxq" err="pods \"coredns-668d6bf9bc-7hwxq\" is forbidden: User \"system:node:ci-4372.0.1-a-e89e5d604b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372.0.1-a-e89e5d604b' and this object" Jul 7 02:15:58.498691 kubelet[4310]: W0707 02:15:58.497522 4310 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4372.0.1-a-e89e5d604b" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4372.0.1-a-e89e5d604b' and this object Jul 7 02:15:58.498691 kubelet[4310]: E0707 02:15:58.497552 4310 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4372.0.1-a-e89e5d604b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4372.0.1-a-e89e5d604b' and this object" logger="UnhandledError" Jul 7 02:15:58.505194 systemd[1]: Created slice kubepods-burstable-pod67f18952_866b_40bb_a879_3ff8e070364a.slice - libcontainer container kubepods-burstable-pod67f18952_866b_40bb_a879_3ff8e070364a.slice. Jul 7 02:15:58.510645 systemd[1]: Created slice kubepods-besteffort-poda28c1297_214b_4a2e_af80_83a741733d7e.slice - libcontainer container kubepods-besteffort-poda28c1297_214b_4a2e_af80_83a741733d7e.slice. Jul 7 02:15:58.514365 systemd[1]: Created slice kubepods-burstable-pod8402c195_fbc9_4582_be73_2715d432c85d.slice - libcontainer container kubepods-burstable-pod8402c195_fbc9_4582_be73_2715d432c85d.slice. 
Jul 7 02:15:58.518732 systemd[1]: Created slice kubepods-besteffort-pod76d3f3c8_064e_4919_b78f_8bb726ce492a.slice - libcontainer container kubepods-besteffort-pod76d3f3c8_064e_4919_b78f_8bb726ce492a.slice. Jul 7 02:15:58.523587 systemd[1]: Created slice kubepods-besteffort-pod19e3fd3f_6889_408b_8d4f_0c6f70c06d65.slice - libcontainer container kubepods-besteffort-pod19e3fd3f_6889_408b_8d4f_0c6f70c06d65.slice. Jul 7 02:15:58.526970 systemd[1]: Created slice kubepods-besteffort-pod1bbf5233_af42_4f01_bf6f_df8d8e280cbf.slice - libcontainer container kubepods-besteffort-pod1bbf5233_af42_4f01_bf6f_df8d8e280cbf.slice. Jul 7 02:15:58.530946 systemd[1]: Created slice kubepods-besteffort-pod2debacf4_a52f_4082_9706_631c367fb96a.slice - libcontainer container kubepods-besteffort-pod2debacf4_a52f_4082_9706_631c367fb96a.slice. Jul 7 02:15:58.534878 systemd[1]: Created slice kubepods-besteffort-pod46223083_50d4_419a_b1e4_3cf8215abc31.slice - libcontainer container kubepods-besteffort-pod46223083_50d4_419a_b1e4_3cf8215abc31.slice. Jul 7 02:15:58.560931 systemd[1]: Created slice kubepods-besteffort-pod620039ac_83fb_48da_b837_6597b522186c.slice - libcontainer container kubepods-besteffort-pod620039ac_83fb_48da_b837_6597b522186c.slice. 
Jul 7 02:15:58.562759 containerd[2803]: time="2025-07-07T02:15:58.562715243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4gts,Uid:620039ac-83fb-48da-b837-6597b522186c,Namespace:calico-system,Attempt:0,}" Jul 7 02:15:58.608745 containerd[2803]: time="2025-07-07T02:15:58.608687180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 02:15:58.612996 kubelet[4310]: I0707 02:15:58.612959 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wbbr\" (UniqueName: \"kubernetes.io/projected/76d3f3c8-064e-4919-b78f-8bb726ce492a-kube-api-access-6wbbr\") pod \"whisker-86588c97d6-9gb7l\" (UID: \"76d3f3c8-064e-4919-b78f-8bb726ce492a\") " pod="calico-system/whisker-86588c97d6-9gb7l" Jul 7 02:15:58.613096 kubelet[4310]: I0707 02:15:58.613003 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-calico-apiserver-certs\") pod \"calico-apiserver-674744f469-z27j7\" (UID: \"1bbf5233-af42-4f01-bf6f-df8d8e280cbf\") " pod="calico-apiserver/calico-apiserver-674744f469-z27j7" Jul 7 02:15:58.613096 kubelet[4310]: I0707 02:15:58.613020 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-calico-apiserver-certs\") pod \"calico-apiserver-674744f469-4r7kc\" (UID: \"19e3fd3f-6889-408b-8d4f-0c6f70c06d65\") " pod="calico-apiserver/calico-apiserver-674744f469-4r7kc" Jul 7 02:15:58.613145 kubelet[4310]: I0707 02:15:58.613098 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46223083-50d4-419a-b1e4-3cf8215abc31-config\") pod \"goldmane-768f4c5c69-rltds\" (UID: 
\"46223083-50d4-419a-b1e4-3cf8215abc31\") " pod="calico-system/goldmane-768f4c5c69-rltds" Jul 7 02:15:58.613145 kubelet[4310]: I0707 02:15:58.613132 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8402c195-fbc9-4582-be73-2715d432c85d-config-volume\") pod \"coredns-668d6bf9bc-527gm\" (UID: \"8402c195-fbc9-4582-be73-2715d432c85d\") " pod="kube-system/coredns-668d6bf9bc-527gm" Jul 7 02:15:58.613227 kubelet[4310]: I0707 02:15:58.613150 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wdv\" (UniqueName: \"kubernetes.io/projected/a28c1297-214b-4a2e-af80-83a741733d7e-kube-api-access-r8wdv\") pod \"calico-apiserver-fc4f5fbcf-wqtv7\" (UID: \"a28c1297-214b-4a2e-af80-83a741733d7e\") " pod="calico-apiserver/calico-apiserver-fc4f5fbcf-wqtv7" Jul 7 02:15:58.613227 kubelet[4310]: I0707 02:15:58.613176 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2debacf4-a52f-4082-9706-631c367fb96a-tigera-ca-bundle\") pod \"calico-kube-controllers-f479b6b87-wdp9r\" (UID: \"2debacf4-a52f-4082-9706-631c367fb96a\") " pod="calico-system/calico-kube-controllers-f479b6b87-wdp9r" Jul 7 02:15:58.613227 kubelet[4310]: I0707 02:15:58.613192 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67f18952-866b-40bb-a879-3ff8e070364a-config-volume\") pod \"coredns-668d6bf9bc-7hwxq\" (UID: \"67f18952-866b-40bb-a879-3ff8e070364a\") " pod="kube-system/coredns-668d6bf9bc-7hwxq" Jul 7 02:15:58.613227 kubelet[4310]: I0707 02:15:58.613211 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-backend-key-pair\") pod \"whisker-86588c97d6-9gb7l\" (UID: \"76d3f3c8-064e-4919-b78f-8bb726ce492a\") " pod="calico-system/whisker-86588c97d6-9gb7l" Jul 7 02:15:58.613308 kubelet[4310]: I0707 02:15:58.613227 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/46223083-50d4-419a-b1e4-3cf8215abc31-goldmane-key-pair\") pod \"goldmane-768f4c5c69-rltds\" (UID: \"46223083-50d4-419a-b1e4-3cf8215abc31\") " pod="calico-system/goldmane-768f4c5c69-rltds" Jul 7 02:15:58.613308 kubelet[4310]: I0707 02:15:58.613243 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdn6z\" (UniqueName: \"kubernetes.io/projected/46223083-50d4-419a-b1e4-3cf8215abc31-kube-api-access-cdn6z\") pod \"goldmane-768f4c5c69-rltds\" (UID: \"46223083-50d4-419a-b1e4-3cf8215abc31\") " pod="calico-system/goldmane-768f4c5c69-rltds" Jul 7 02:15:58.613308 kubelet[4310]: I0707 02:15:58.613260 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-ca-bundle\") pod \"whisker-86588c97d6-9gb7l\" (UID: \"76d3f3c8-064e-4919-b78f-8bb726ce492a\") " pod="calico-system/whisker-86588c97d6-9gb7l" Jul 7 02:15:58.613308 kubelet[4310]: I0707 02:15:58.613297 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a28c1297-214b-4a2e-af80-83a741733d7e-calico-apiserver-certs\") pod \"calico-apiserver-fc4f5fbcf-wqtv7\" (UID: \"a28c1297-214b-4a2e-af80-83a741733d7e\") " pod="calico-apiserver/calico-apiserver-fc4f5fbcf-wqtv7" Jul 7 02:15:58.613391 kubelet[4310]: I0707 02:15:58.613320 4310 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56mv4\" (UniqueName: \"kubernetes.io/projected/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-kube-api-access-56mv4\") pod \"calico-apiserver-674744f469-z27j7\" (UID: \"1bbf5233-af42-4f01-bf6f-df8d8e280cbf\") " pod="calico-apiserver/calico-apiserver-674744f469-z27j7" Jul 7 02:15:58.613391 kubelet[4310]: I0707 02:15:58.613351 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfwm\" (UniqueName: \"kubernetes.io/projected/2debacf4-a52f-4082-9706-631c367fb96a-kube-api-access-2kfwm\") pod \"calico-kube-controllers-f479b6b87-wdp9r\" (UID: \"2debacf4-a52f-4082-9706-631c367fb96a\") " pod="calico-system/calico-kube-controllers-f479b6b87-wdp9r" Jul 7 02:15:58.613391 kubelet[4310]: I0707 02:15:58.613372 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcpsx\" (UniqueName: \"kubernetes.io/projected/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-kube-api-access-jcpsx\") pod \"calico-apiserver-674744f469-4r7kc\" (UID: \"19e3fd3f-6889-408b-8d4f-0c6f70c06d65\") " pod="calico-apiserver/calico-apiserver-674744f469-4r7kc" Jul 7 02:15:58.613391 kubelet[4310]: I0707 02:15:58.613389 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jmc4\" (UniqueName: \"kubernetes.io/projected/8402c195-fbc9-4582-be73-2715d432c85d-kube-api-access-2jmc4\") pod \"coredns-668d6bf9bc-527gm\" (UID: \"8402c195-fbc9-4582-be73-2715d432c85d\") " pod="kube-system/coredns-668d6bf9bc-527gm" Jul 7 02:15:58.613472 kubelet[4310]: I0707 02:15:58.613414 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9882v\" (UniqueName: \"kubernetes.io/projected/67f18952-866b-40bb-a879-3ff8e070364a-kube-api-access-9882v\") pod \"coredns-668d6bf9bc-7hwxq\" (UID: 
\"67f18952-866b-40bb-a879-3ff8e070364a\") " pod="kube-system/coredns-668d6bf9bc-7hwxq" Jul 7 02:15:58.613472 kubelet[4310]: I0707 02:15:58.613436 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46223083-50d4-419a-b1e4-3cf8215abc31-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-rltds\" (UID: \"46223083-50d4-419a-b1e4-3cf8215abc31\") " pod="calico-system/goldmane-768f4c5c69-rltds" Jul 7 02:15:58.619870 containerd[2803]: time="2025-07-07T02:15:58.619824113Z" level=error msg="Failed to destroy network for sandbox \"c6a7df81f2002b7472cc39e9f17008ccd7fa6a03835c938a524bebe0c5e21d68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.620241 containerd[2803]: time="2025-07-07T02:15:58.620213801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4gts,Uid:620039ac-83fb-48da-b837-6597b522186c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6a7df81f2002b7472cc39e9f17008ccd7fa6a03835c938a524bebe0c5e21d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.620386 kubelet[4310]: E0707 02:15:58.620358 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6a7df81f2002b7472cc39e9f17008ccd7fa6a03835c938a524bebe0c5e21d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.620430 kubelet[4310]: E0707 02:15:58.620417 4310 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6a7df81f2002b7472cc39e9f17008ccd7fa6a03835c938a524bebe0c5e21d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v4gts" Jul 7 02:15:58.620463 kubelet[4310]: E0707 02:15:58.620437 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6a7df81f2002b7472cc39e9f17008ccd7fa6a03835c938a524bebe0c5e21d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v4gts" Jul 7 02:15:58.620495 kubelet[4310]: E0707 02:15:58.620474 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v4gts_calico-system(620039ac-83fb-48da-b837-6597b522186c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v4gts_calico-system(620039ac-83fb-48da-b837-6597b522186c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6a7df81f2002b7472cc39e9f17008ccd7fa6a03835c938a524bebe0c5e21d68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v4gts" podUID="620039ac-83fb-48da-b837-6597b522186c" Jul 7 02:15:58.821256 containerd[2803]: time="2025-07-07T02:15:58.821151440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86588c97d6-9gb7l,Uid:76d3f3c8-064e-4919-b78f-8bb726ce492a,Namespace:calico-system,Attempt:0,}" Jul 7 02:15:58.834310 containerd[2803]: time="2025-07-07T02:15:58.834277131Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f479b6b87-wdp9r,Uid:2debacf4-a52f-4082-9706-631c367fb96a,Namespace:calico-system,Attempt:0,}" Jul 7 02:15:58.837753 containerd[2803]: time="2025-07-07T02:15:58.837726090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rltds,Uid:46223083-50d4-419a-b1e4-3cf8215abc31,Namespace:calico-system,Attempt:0,}" Jul 7 02:15:58.859789 containerd[2803]: time="2025-07-07T02:15:58.859746897Z" level=error msg="Failed to destroy network for sandbox \"0e418106de5eee25898cf06573a288a3d8403d105f595963f6cccf56dfbcab59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.860148 containerd[2803]: time="2025-07-07T02:15:58.860121347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86588c97d6-9gb7l,Uid:76d3f3c8-064e-4919-b78f-8bb726ce492a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e418106de5eee25898cf06573a288a3d8403d105f595963f6cccf56dfbcab59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.860319 kubelet[4310]: E0707 02:15:58.860290 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e418106de5eee25898cf06573a288a3d8403d105f595963f6cccf56dfbcab59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.860363 kubelet[4310]: E0707 02:15:58.860339 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"0e418106de5eee25898cf06573a288a3d8403d105f595963f6cccf56dfbcab59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86588c97d6-9gb7l" Jul 7 02:15:58.860388 kubelet[4310]: E0707 02:15:58.860360 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e418106de5eee25898cf06573a288a3d8403d105f595963f6cccf56dfbcab59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86588c97d6-9gb7l" Jul 7 02:15:58.860420 kubelet[4310]: E0707 02:15:58.860397 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86588c97d6-9gb7l_calico-system(76d3f3c8-064e-4919-b78f-8bb726ce492a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86588c97d6-9gb7l_calico-system(76d3f3c8-064e-4919-b78f-8bb726ce492a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e418106de5eee25898cf06573a288a3d8403d105f595963f6cccf56dfbcab59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86588c97d6-9gb7l" podUID="76d3f3c8-064e-4919-b78f-8bb726ce492a" Jul 7 02:15:58.875337 containerd[2803]: time="2025-07-07T02:15:58.875299831Z" level=error msg="Failed to destroy network for sandbox \"b16194b9317dd83deae60e4f8957b7a1492924c439bc4dab94cfce2e22963b3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 02:15:58.875747 containerd[2803]: time="2025-07-07T02:15:58.875720596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f479b6b87-wdp9r,Uid:2debacf4-a52f-4082-9706-631c367fb96a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b16194b9317dd83deae60e4f8957b7a1492924c439bc4dab94cfce2e22963b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.875926 kubelet[4310]: E0707 02:15:58.875881 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b16194b9317dd83deae60e4f8957b7a1492924c439bc4dab94cfce2e22963b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.875960 kubelet[4310]: E0707 02:15:58.875948 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b16194b9317dd83deae60e4f8957b7a1492924c439bc4dab94cfce2e22963b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f479b6b87-wdp9r" Jul 7 02:15:58.875992 kubelet[4310]: E0707 02:15:58.875965 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b16194b9317dd83deae60e4f8957b7a1492924c439bc4dab94cfce2e22963b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-f479b6b87-wdp9r" Jul 7 02:15:58.876026 kubelet[4310]: E0707 02:15:58.876004 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f479b6b87-wdp9r_calico-system(2debacf4-a52f-4082-9706-631c367fb96a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f479b6b87-wdp9r_calico-system(2debacf4-a52f-4082-9706-631c367fb96a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b16194b9317dd83deae60e4f8957b7a1492924c439bc4dab94cfce2e22963b3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f479b6b87-wdp9r" podUID="2debacf4-a52f-4082-9706-631c367fb96a" Jul 7 02:15:58.876199 containerd[2803]: time="2025-07-07T02:15:58.876164080Z" level=error msg="Failed to destroy network for sandbox \"26ab6da8836b094d29ca2ea64a6457383c3b20d89985149a0bae65e8ef24add0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.876526 containerd[2803]: time="2025-07-07T02:15:58.876500493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rltds,Uid:46223083-50d4-419a-b1e4-3cf8215abc31,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab6da8836b094d29ca2ea64a6457383c3b20d89985149a0bae65e8ef24add0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.876643 kubelet[4310]: E0707 02:15:58.876616 4310 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab6da8836b094d29ca2ea64a6457383c3b20d89985149a0bae65e8ef24add0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:58.876672 kubelet[4310]: E0707 02:15:58.876659 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab6da8836b094d29ca2ea64a6457383c3b20d89985149a0bae65e8ef24add0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-rltds" Jul 7 02:15:58.876715 kubelet[4310]: E0707 02:15:58.876677 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab6da8836b094d29ca2ea64a6457383c3b20d89985149a0bae65e8ef24add0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-rltds" Jul 7 02:15:58.876740 kubelet[4310]: E0707 02:15:58.876716 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-rltds_calico-system(46223083-50d4-419a-b1e4-3cf8215abc31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-rltds_calico-system(46223083-50d4-419a-b1e4-3cf8215abc31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26ab6da8836b094d29ca2ea64a6457383c3b20d89985149a0bae65e8ef24add0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-rltds" podUID="46223083-50d4-419a-b1e4-3cf8215abc31" Jul 7 02:15:58.933014 systemd[1]: run-netns-cni\x2ddf20f238\x2d18d3\x2d158a\x2d0ab9\x2dafde4b263e42.mount: Deactivated successfully. Jul 7 02:15:59.708823 containerd[2803]: time="2025-07-07T02:15:59.708791914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7hwxq,Uid:67f18952-866b-40bb-a879-3ff8e070364a,Namespace:kube-system,Attempt:0,}" Jul 7 02:15:59.717591 containerd[2803]: time="2025-07-07T02:15:59.717557476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-527gm,Uid:8402c195-fbc9-4582-be73-2715d432c85d,Namespace:kube-system,Attempt:0,}" Jul 7 02:15:59.721734 kubelet[4310]: E0707 02:15:59.721718 4310 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.721905 kubelet[4310]: E0707 02:15:59.721743 4310 projected.go:194] Error preparing data for projected volume kube-api-access-r8wdv for pod calico-apiserver/calico-apiserver-fc4f5fbcf-wqtv7: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.721905 kubelet[4310]: E0707 02:15:59.721746 4310 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.721905 kubelet[4310]: E0707 02:15:59.721773 4310 projected.go:194] Error preparing data for projected volume kube-api-access-jcpsx for pod calico-apiserver/calico-apiserver-674744f469-4r7kc: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.721905 kubelet[4310]: E0707 02:15:59.721789 4310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a28c1297-214b-4a2e-af80-83a741733d7e-kube-api-access-r8wdv podName:a28c1297-214b-4a2e-af80-83a741733d7e nodeName:}" failed. 
No retries permitted until 2025-07-07 02:16:00.22177207 +0000 UTC m=+26.736417165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r8wdv" (UniqueName: "kubernetes.io/projected/a28c1297-214b-4a2e-af80-83a741733d7e-kube-api-access-r8wdv") pod "calico-apiserver-fc4f5fbcf-wqtv7" (UID: "a28c1297-214b-4a2e-af80-83a741733d7e") : failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.721905 kubelet[4310]: E0707 02:15:59.721723 4310 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.721905 kubelet[4310]: E0707 02:15:59.721811 4310 projected.go:194] Error preparing data for projected volume kube-api-access-56mv4 for pod calico-apiserver/calico-apiserver-674744f469-z27j7: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.722073 kubelet[4310]: E0707 02:15:59.721817 4310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-kube-api-access-jcpsx podName:19e3fd3f-6889-408b-8d4f-0c6f70c06d65 nodeName:}" failed. No retries permitted until 2025-07-07 02:16:00.221802068 +0000 UTC m=+26.736447203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jcpsx" (UniqueName: "kubernetes.io/projected/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-kube-api-access-jcpsx") pod "calico-apiserver-674744f469-4r7kc" (UID: "19e3fd3f-6889-408b-8d4f-0c6f70c06d65") : failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.722073 kubelet[4310]: E0707 02:15:59.721848 4310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-kube-api-access-56mv4 podName:1bbf5233-af42-4f01-bf6f-df8d8e280cbf nodeName:}" failed. 
No retries permitted until 2025-07-07 02:16:00.221836465 +0000 UTC m=+26.736481600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-56mv4" (UniqueName: "kubernetes.io/projected/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-kube-api-access-56mv4") pod "calico-apiserver-674744f469-z27j7" (UID: "1bbf5233-af42-4f01-bf6f-df8d8e280cbf") : failed to sync configmap cache: timed out waiting for the condition Jul 7 02:15:59.748715 containerd[2803]: time="2025-07-07T02:15:59.748679190Z" level=error msg="Failed to destroy network for sandbox \"591333541b12c6df7afd80cec82c97192e9bcb02e4cee41cfa29be69355f3804\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:59.749087 containerd[2803]: time="2025-07-07T02:15:59.749061800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7hwxq,Uid:67f18952-866b-40bb-a879-3ff8e070364a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"591333541b12c6df7afd80cec82c97192e9bcb02e4cee41cfa29be69355f3804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:59.749251 kubelet[4310]: E0707 02:15:59.749222 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591333541b12c6df7afd80cec82c97192e9bcb02e4cee41cfa29be69355f3804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:59.749289 kubelet[4310]: E0707 02:15:59.749276 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"591333541b12c6df7afd80cec82c97192e9bcb02e4cee41cfa29be69355f3804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7hwxq" Jul 7 02:15:59.749315 kubelet[4310]: E0707 02:15:59.749295 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591333541b12c6df7afd80cec82c97192e9bcb02e4cee41cfa29be69355f3804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7hwxq" Jul 7 02:15:59.749351 kubelet[4310]: E0707 02:15:59.749331 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7hwxq_kube-system(67f18952-866b-40bb-a879-3ff8e070364a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7hwxq_kube-system(67f18952-866b-40bb-a879-3ff8e070364a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"591333541b12c6df7afd80cec82c97192e9bcb02e4cee41cfa29be69355f3804\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7hwxq" podUID="67f18952-866b-40bb-a879-3ff8e070364a" Jul 7 02:15:59.750308 systemd[1]: run-netns-cni\x2d98c36776\x2d54e6\x2d9db4\x2d8e9a\x2da43dfce81c25.mount: Deactivated successfully. 
Jul 7 02:15:59.757716 containerd[2803]: time="2025-07-07T02:15:59.757689213Z" level=error msg="Failed to destroy network for sandbox \"7d5f9cfc5ed3c430f9927a5a33c20596a495f9eb8397a099d1921265cd45b50e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:59.758038 containerd[2803]: time="2025-07-07T02:15:59.758015428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-527gm,Uid:8402c195-fbc9-4582-be73-2715d432c85d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5f9cfc5ed3c430f9927a5a33c20596a495f9eb8397a099d1921265cd45b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:59.758184 kubelet[4310]: E0707 02:15:59.758159 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5f9cfc5ed3c430f9927a5a33c20596a495f9eb8397a099d1921265cd45b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:15:59.758220 kubelet[4310]: E0707 02:15:59.758204 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5f9cfc5ed3c430f9927a5a33c20596a495f9eb8397a099d1921265cd45b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-527gm" Jul 7 02:15:59.758241 kubelet[4310]: E0707 02:15:59.758226 4310 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5f9cfc5ed3c430f9927a5a33c20596a495f9eb8397a099d1921265cd45b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-527gm" Jul 7 02:15:59.758276 kubelet[4310]: E0707 02:15:59.758259 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-527gm_kube-system(8402c195-fbc9-4582-be73-2715d432c85d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-527gm_kube-system(8402c195-fbc9-4582-be73-2715d432c85d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d5f9cfc5ed3c430f9927a5a33c20596a495f9eb8397a099d1921265cd45b50e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-527gm" podUID="8402c195-fbc9-4582-be73-2715d432c85d" Jul 7 02:15:59.759179 systemd[1]: run-netns-cni\x2de93e3029\x2db90f\x2d0733\x2d7e53\x2ddee07f46ac3f.mount: Deactivated successfully. 
Jul 7 02:16:00.313381 containerd[2803]: time="2025-07-07T02:16:00.313348377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4f5fbcf-wqtv7,Uid:a28c1297-214b-4a2e-af80-83a741733d7e,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:16:00.326005 containerd[2803]: time="2025-07-07T02:16:00.325967490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-4r7kc,Uid:19e3fd3f-6889-408b-8d4f-0c6f70c06d65,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:16:00.329519 containerd[2803]: time="2025-07-07T02:16:00.329493551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-z27j7,Uid:1bbf5233-af42-4f01-bf6f-df8d8e280cbf,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:16:00.353741 containerd[2803]: time="2025-07-07T02:16:00.353707292Z" level=error msg="Failed to destroy network for sandbox \"8c13d007a71187ec48f0e00d0b414795a17c0e75df7d900e81d3a4c789f13b54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.354091 containerd[2803]: time="2025-07-07T02:16:00.354064186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4f5fbcf-wqtv7,Uid:a28c1297-214b-4a2e-af80-83a741733d7e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c13d007a71187ec48f0e00d0b414795a17c0e75df7d900e81d3a4c789f13b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.354264 kubelet[4310]: E0707 02:16:00.354233 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c13d007a71187ec48f0e00d0b414795a17c0e75df7d900e81d3a4c789f13b54\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.354302 kubelet[4310]: E0707 02:16:00.354287 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c13d007a71187ec48f0e00d0b414795a17c0e75df7d900e81d3a4c789f13b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc4f5fbcf-wqtv7" Jul 7 02:16:00.354326 kubelet[4310]: E0707 02:16:00.354305 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c13d007a71187ec48f0e00d0b414795a17c0e75df7d900e81d3a4c789f13b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc4f5fbcf-wqtv7" Jul 7 02:16:00.354371 kubelet[4310]: E0707 02:16:00.354348 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fc4f5fbcf-wqtv7_calico-apiserver(a28c1297-214b-4a2e-af80-83a741733d7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fc4f5fbcf-wqtv7_calico-apiserver(a28c1297-214b-4a2e-af80-83a741733d7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c13d007a71187ec48f0e00d0b414795a17c0e75df7d900e81d3a4c789f13b54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc4f5fbcf-wqtv7" 
podUID="a28c1297-214b-4a2e-af80-83a741733d7e" Jul 7 02:16:00.366475 containerd[2803]: time="2025-07-07T02:16:00.366435357Z" level=error msg="Failed to destroy network for sandbox \"dc57a990eeba7fb804f300e8f48d4ee6f6216903dccab9a41d676ba72c189462\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.366846 containerd[2803]: time="2025-07-07T02:16:00.366817529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-4r7kc,Uid:19e3fd3f-6889-408b-8d4f-0c6f70c06d65,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc57a990eeba7fb804f300e8f48d4ee6f6216903dccab9a41d676ba72c189462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.367013 kubelet[4310]: E0707 02:16:00.366978 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc57a990eeba7fb804f300e8f48d4ee6f6216903dccab9a41d676ba72c189462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.367045 kubelet[4310]: E0707 02:16:00.367036 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc57a990eeba7fb804f300e8f48d4ee6f6216903dccab9a41d676ba72c189462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674744f469-4r7kc" Jul 7 02:16:00.367075 
kubelet[4310]: E0707 02:16:00.367054 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc57a990eeba7fb804f300e8f48d4ee6f6216903dccab9a41d676ba72c189462\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674744f469-4r7kc" Jul 7 02:16:00.367124 kubelet[4310]: E0707 02:16:00.367088 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674744f469-4r7kc_calico-apiserver(19e3fd3f-6889-408b-8d4f-0c6f70c06d65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674744f469-4r7kc_calico-apiserver(19e3fd3f-6889-408b-8d4f-0c6f70c06d65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc57a990eeba7fb804f300e8f48d4ee6f6216903dccab9a41d676ba72c189462\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674744f469-4r7kc" podUID="19e3fd3f-6889-408b-8d4f-0c6f70c06d65" Jul 7 02:16:00.369099 containerd[2803]: time="2025-07-07T02:16:00.369073603Z" level=error msg="Failed to destroy network for sandbox \"7b546a5533a243041c066dc341a483c1cfd0eedd982d160ed782ce1baf4b7d60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.369404 containerd[2803]: time="2025-07-07T02:16:00.369379541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-z27j7,Uid:1bbf5233-af42-4f01-bf6f-df8d8e280cbf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"7b546a5533a243041c066dc341a483c1cfd0eedd982d160ed782ce1baf4b7d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.369527 kubelet[4310]: E0707 02:16:00.369506 4310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b546a5533a243041c066dc341a483c1cfd0eedd982d160ed782ce1baf4b7d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:16:00.369550 kubelet[4310]: E0707 02:16:00.369538 4310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b546a5533a243041c066dc341a483c1cfd0eedd982d160ed782ce1baf4b7d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674744f469-z27j7" Jul 7 02:16:00.369571 kubelet[4310]: E0707 02:16:00.369554 4310 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b546a5533a243041c066dc341a483c1cfd0eedd982d160ed782ce1baf4b7d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674744f469-z27j7" Jul 7 02:16:00.369599 kubelet[4310]: E0707 02:16:00.369584 4310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-674744f469-z27j7_calico-apiserver(1bbf5233-af42-4f01-bf6f-df8d8e280cbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674744f469-z27j7_calico-apiserver(1bbf5233-af42-4f01-bf6f-df8d8e280cbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b546a5533a243041c066dc341a483c1cfd0eedd982d160ed782ce1baf4b7d60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674744f469-z27j7" podUID="1bbf5233-af42-4f01-bf6f-df8d8e280cbf" Jul 7 02:16:01.117239 systemd[1]: run-netns-cni\x2d9ec2bfd9\x2dc988\x2debcc\x2da1c2\x2dfc028549f08e.mount: Deactivated successfully. Jul 7 02:16:01.117319 systemd[1]: run-netns-cni\x2db1707a46\x2ded31\x2dcc87\x2dd5d3\x2d62651189c1cd.mount: Deactivated successfully. Jul 7 02:16:01.339323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865713034.mount: Deactivated successfully. 
Jul 7 02:16:01.356987 containerd[2803]: time="2025-07-07T02:16:01.356906441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 02:16:01.356987 containerd[2803]: time="2025-07-07T02:16:01.356934920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:01.357605 containerd[2803]: time="2025-07-07T02:16:01.357581594Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:01.358884 containerd[2803]: time="2025-07-07T02:16:01.358864945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:01.359410 containerd[2803]: time="2025-07-07T02:16:01.359384068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 2.750661372s" Jul 7 02:16:01.359439 containerd[2803]: time="2025-07-07T02:16:01.359416226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 02:16:01.364661 containerd[2803]: time="2025-07-07T02:16:01.364640341Z" level=info msg="CreateContainer within sandbox \"645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 02:16:01.373081 containerd[2803]: time="2025-07-07T02:16:01.373027835Z" level=info msg="Container 
762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:01.378218 containerd[2803]: time="2025-07-07T02:16:01.378182555Z" level=info msg="CreateContainer within sandbox \"645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\"" Jul 7 02:16:01.378629 containerd[2803]: time="2025-07-07T02:16:01.378575648Z" level=info msg="StartContainer for \"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\"" Jul 7 02:16:01.379922 containerd[2803]: time="2025-07-07T02:16:01.379899635Z" level=info msg="connecting to shim 762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05" address="unix:///run/containerd/s/5f7efaa5f805765697b7569a09b49ac0039e5a3f1f5ddda08cc56c5eea8806f7" protocol=ttrpc version=3 Jul 7 02:16:01.407850 systemd[1]: Started cri-containerd-762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05.scope - libcontainer container 762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05. Jul 7 02:16:01.437760 containerd[2803]: time="2025-07-07T02:16:01.437727716Z" level=info msg="StartContainer for \"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" returns successfully" Jul 7 02:16:01.567106 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 02:16:01.567202 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 02:16:01.626931 kubelet[4310]: I0707 02:16:01.626819 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cs4lm" podStartSLOduration=1.392459532 podStartE2EDuration="7.626800349s" podCreationTimestamp="2025-07-07 02:15:54 +0000 UTC" firstStartedPulling="2025-07-07 02:15:55.12563153 +0000 UTC m=+21.640276625" lastFinishedPulling="2025-07-07 02:16:01.359972307 +0000 UTC m=+27.874617442" observedRunningTime="2025-07-07 02:16:01.626400657 +0000 UTC m=+28.141045792" watchObservedRunningTime="2025-07-07 02:16:01.626800349 +0000 UTC m=+28.141445484" Jul 7 02:16:01.732107 kubelet[4310]: I0707 02:16:01.732061 4310 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wbbr\" (UniqueName: \"kubernetes.io/projected/76d3f3c8-064e-4919-b78f-8bb726ce492a-kube-api-access-6wbbr\") pod \"76d3f3c8-064e-4919-b78f-8bb726ce492a\" (UID: \"76d3f3c8-064e-4919-b78f-8bb726ce492a\") " Jul 7 02:16:01.732107 kubelet[4310]: I0707 02:16:01.732119 4310 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-backend-key-pair\") pod \"76d3f3c8-064e-4919-b78f-8bb726ce492a\" (UID: \"76d3f3c8-064e-4919-b78f-8bb726ce492a\") " Jul 7 02:16:01.732305 kubelet[4310]: I0707 02:16:01.732137 4310 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-ca-bundle\") pod \"76d3f3c8-064e-4919-b78f-8bb726ce492a\" (UID: \"76d3f3c8-064e-4919-b78f-8bb726ce492a\") " Jul 7 02:16:01.732599 kubelet[4310]: I0707 02:16:01.732569 4310 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "76d3f3c8-064e-4919-b78f-8bb726ce492a" 
(UID: "76d3f3c8-064e-4919-b78f-8bb726ce492a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 02:16:01.734283 kubelet[4310]: I0707 02:16:01.734259 4310 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d3f3c8-064e-4919-b78f-8bb726ce492a-kube-api-access-6wbbr" (OuterVolumeSpecName: "kube-api-access-6wbbr") pod "76d3f3c8-064e-4919-b78f-8bb726ce492a" (UID: "76d3f3c8-064e-4919-b78f-8bb726ce492a"). InnerVolumeSpecName "kube-api-access-6wbbr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 02:16:01.734325 kubelet[4310]: I0707 02:16:01.734301 4310 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "76d3f3c8-064e-4919-b78f-8bb726ce492a" (UID: "76d3f3c8-064e-4919-b78f-8bb726ce492a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 02:16:01.832674 kubelet[4310]: I0707 02:16:01.832649 4310 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-backend-key-pair\") on node \"ci-4372.0.1-a-e89e5d604b\" DevicePath \"\"" Jul 7 02:16:01.832674 kubelet[4310]: I0707 02:16:01.832669 4310 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d3f3c8-064e-4919-b78f-8bb726ce492a-whisker-ca-bundle\") on node \"ci-4372.0.1-a-e89e5d604b\" DevicePath \"\"" Jul 7 02:16:01.832783 kubelet[4310]: I0707 02:16:01.832679 4310 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wbbr\" (UniqueName: \"kubernetes.io/projected/76d3f3c8-064e-4919-b78f-8bb726ce492a-kube-api-access-6wbbr\") on node \"ci-4372.0.1-a-e89e5d604b\" DevicePath \"\"" Jul 7 02:16:02.118011 systemd[1]: var-lib-kubelet-pods-76d3f3c8\x2d064e\x2d4919\x2db78f\x2d8bb726ce492a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6wbbr.mount: Deactivated successfully. Jul 7 02:16:02.118093 systemd[1]: var-lib-kubelet-pods-76d3f3c8\x2d064e\x2d4919\x2db78f\x2d8bb726ce492a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 02:16:02.616672 kubelet[4310]: I0707 02:16:02.616649 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:02.620131 systemd[1]: Removed slice kubepods-besteffort-pod76d3f3c8_064e_4919_b78f_8bb726ce492a.slice - libcontainer container kubepods-besteffort-pod76d3f3c8_064e_4919_b78f_8bb726ce492a.slice. Jul 7 02:16:02.646485 systemd[1]: Created slice kubepods-besteffort-pod85102a9d_e570_4a30_9a8f_e1e835d0ac4d.slice - libcontainer container kubepods-besteffort-pod85102a9d_e570_4a30_9a8f_e1e835d0ac4d.slice. 
Jul 7 02:16:02.737151 kubelet[4310]: I0707 02:16:02.737117 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6zp\" (UniqueName: \"kubernetes.io/projected/85102a9d-e570-4a30-9a8f-e1e835d0ac4d-kube-api-access-vj6zp\") pod \"whisker-69bfc559f8-m494x\" (UID: \"85102a9d-e570-4a30-9a8f-e1e835d0ac4d\") " pod="calico-system/whisker-69bfc559f8-m494x" Jul 7 02:16:02.737349 kubelet[4310]: I0707 02:16:02.737215 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85102a9d-e570-4a30-9a8f-e1e835d0ac4d-whisker-backend-key-pair\") pod \"whisker-69bfc559f8-m494x\" (UID: \"85102a9d-e570-4a30-9a8f-e1e835d0ac4d\") " pod="calico-system/whisker-69bfc559f8-m494x" Jul 7 02:16:02.737349 kubelet[4310]: I0707 02:16:02.737286 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85102a9d-e570-4a30-9a8f-e1e835d0ac4d-whisker-ca-bundle\") pod \"whisker-69bfc559f8-m494x\" (UID: \"85102a9d-e570-4a30-9a8f-e1e835d0ac4d\") " pod="calico-system/whisker-69bfc559f8-m494x" Jul 7 02:16:02.948741 containerd[2803]: time="2025-07-07T02:16:02.948624704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69bfc559f8-m494x,Uid:85102a9d-e570-4a30-9a8f-e1e835d0ac4d,Namespace:calico-system,Attempt:0,}" Jul 7 02:16:03.069141 systemd-networkd[2713]: cali935577fc2f7: Link UP Jul 7 02:16:03.069365 systemd-networkd[2713]: cali935577fc2f7: Gained carrier Jul 7 02:16:03.076525 containerd[2803]: 2025-07-07 02:16:02.983 [INFO][6242] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:03.076525 containerd[2803]: 2025-07-07 02:16:02.998 [INFO][6242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0 whisker-69bfc559f8- calico-system 85102a9d-e570-4a30-9a8f-e1e835d0ac4d 859 0 2025-07-07 02:16:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69bfc559f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b whisker-69bfc559f8-m494x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali935577fc2f7 [] [] }} ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-" Jul 7 02:16:03.076525 containerd[2803]: 2025-07-07 02:16:02.998 [INFO][6242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" Jul 7 02:16:03.076525 containerd[2803]: 2025-07-07 02:16:03.035 [INFO][6273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" HandleID="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Workload="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.035 [INFO][6273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" HandleID="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Workload="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006b6780), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4372.0.1-a-e89e5d604b", "pod":"whisker-69bfc559f8-m494x", "timestamp":"2025-07-07 02:16:03.035759424 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.035 [INFO][6273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.035 [INFO][6273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.036 [INFO][6273] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.044 [INFO][6273] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.047 [INFO][6273] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.051 [INFO][6273] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.053 [INFO][6273] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076712 containerd[2803]: 2025-07-07 02:16:03.054 [INFO][6273] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076878 containerd[2803]: 2025-07-07 02:16:03.054 [INFO][6273] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 
handle="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076878 containerd[2803]: 2025-07-07 02:16:03.055 [INFO][6273] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67 Jul 7 02:16:03.076878 containerd[2803]: 2025-07-07 02:16:03.058 [INFO][6273] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076878 containerd[2803]: 2025-07-07 02:16:03.061 [INFO][6273] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.129/26] block=192.168.33.128/26 handle="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076878 containerd[2803]: 2025-07-07 02:16:03.061 [INFO][6273] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.129/26] handle="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:03.076878 containerd[2803]: 2025-07-07 02:16:03.061 [INFO][6273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:16:03.076878 containerd[2803]: 2025-07-07 02:16:03.061 [INFO][6273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.129/26] IPv6=[] ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" HandleID="k8s-pod-network.d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Workload="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" Jul 7 02:16:03.076995 containerd[2803]: 2025-07-07 02:16:03.063 [INFO][6242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0", GenerateName:"whisker-69bfc559f8-", Namespace:"calico-system", SelfLink:"", UID:"85102a9d-e570-4a30-9a8f-e1e835d0ac4d", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69bfc559f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"whisker-69bfc559f8-m494x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali935577fc2f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:03.076995 containerd[2803]: 2025-07-07 02:16:03.063 [INFO][6242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.129/32] ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" Jul 7 02:16:03.077060 containerd[2803]: 2025-07-07 02:16:03.064 [INFO][6242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali935577fc2f7 ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" Jul 7 02:16:03.077060 containerd[2803]: 2025-07-07 02:16:03.069 [INFO][6242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" Jul 7 02:16:03.077095 containerd[2803]: 2025-07-07 02:16:03.069 [INFO][6242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0", GenerateName:"whisker-69bfc559f8-", Namespace:"calico-system", SelfLink:"", UID:"85102a9d-e570-4a30-9a8f-e1e835d0ac4d", 
ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 16, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69bfc559f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67", Pod:"whisker-69bfc559f8-m494x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali935577fc2f7", MAC:"be:aa:1e:32:37:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:03.077138 containerd[2803]: 2025-07-07 02:16:03.074 [INFO][6242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" Namespace="calico-system" Pod="whisker-69bfc559f8-m494x" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-whisker--69bfc559f8--m494x-eth0" Jul 7 02:16:03.086332 containerd[2803]: time="2025-07-07T02:16:03.086300825Z" level=info msg="connecting to shim d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67" address="unix:///run/containerd/s/7875c4f441d73e56a79f9c201546f954349d0ef6ea4cac26706ead2c893d1a05" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:03.114856 systemd[1]: Started cri-containerd-d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67.scope - libcontainer container 
d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67. Jul 7 02:16:03.141264 containerd[2803]: time="2025-07-07T02:16:03.141146514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69bfc559f8-m494x,Uid:85102a9d-e570-4a30-9a8f-e1e835d0ac4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67\"" Jul 7 02:16:03.143131 containerd[2803]: time="2025-07-07T02:16:03.143070273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 02:16:03.559338 kubelet[4310]: I0707 02:16:03.559308 4310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d3f3c8-064e-4919-b78f-8bb726ce492a" path="/var/lib/kubelet/pods/76d3f3c8-064e-4919-b78f-8bb726ce492a/volumes" Jul 7 02:16:03.590351 containerd[2803]: time="2025-07-07T02:16:03.590321606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:03.590390 containerd[2803]: time="2025-07-07T02:16:03.590366484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 02:16:03.591048 containerd[2803]: time="2025-07-07T02:16:03.591023882Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:03.592668 containerd[2803]: time="2025-07-07T02:16:03.592641140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:03.593307 containerd[2803]: time="2025-07-07T02:16:03.593287059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 450.14791ms" Jul 7 02:16:03.593335 containerd[2803]: time="2025-07-07T02:16:03.593309817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 02:16:03.594672 containerd[2803]: time="2025-07-07T02:16:03.594651652Z" level=info msg="CreateContainer within sandbox \"d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 02:16:03.597670 containerd[2803]: time="2025-07-07T02:16:03.597639943Z" level=info msg="Container 874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:03.600938 containerd[2803]: time="2025-07-07T02:16:03.600916256Z" level=info msg="CreateContainer within sandbox \"d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47\"" Jul 7 02:16:03.601262 containerd[2803]: time="2025-07-07T02:16:03.601238956Z" level=info msg="StartContainer for \"874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47\"" Jul 7 02:16:03.603059 containerd[2803]: time="2025-07-07T02:16:03.603020723Z" level=info msg="connecting to shim 874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47" address="unix:///run/containerd/s/7875c4f441d73e56a79f9c201546f954349d0ef6ea4cac26706ead2c893d1a05" protocol=ttrpc version=3 Jul 7 02:16:03.631798 systemd[1]: Started cri-containerd-874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47.scope - libcontainer container 874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47. 
Jul 7 02:16:03.659808 containerd[2803]: time="2025-07-07T02:16:03.659781130Z" level=info msg="StartContainer for \"874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47\" returns successfully" Jul 7 02:16:03.660631 containerd[2803]: time="2025-07-07T02:16:03.660609158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 02:16:04.212802 systemd-networkd[2713]: cali935577fc2f7: Gained IPv6LL Jul 7 02:16:04.360930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931723273.mount: Deactivated successfully. Jul 7 02:16:04.363753 containerd[2803]: time="2025-07-07T02:16:04.363716380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:04.364104 containerd[2803]: time="2025-07-07T02:16:04.363743858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 02:16:04.365866 containerd[2803]: time="2025-07-07T02:16:04.365814853Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:04.366557 containerd[2803]: time="2025-07-07T02:16:04.366500092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 705.860775ms" Jul 7 02:16:04.366557 containerd[2803]: time="2025-07-07T02:16:04.366532570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 02:16:04.366982 
containerd[2803]: time="2025-07-07T02:16:04.366951184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:04.368072 containerd[2803]: time="2025-07-07T02:16:04.368048398Z" level=info msg="CreateContainer within sandbox \"d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 02:16:04.371705 containerd[2803]: time="2025-07-07T02:16:04.371664420Z" level=info msg="Container 6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:04.377043 containerd[2803]: time="2025-07-07T02:16:04.376964701Z" level=info msg="CreateContainer within sandbox \"d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e\"" Jul 7 02:16:04.377372 containerd[2803]: time="2025-07-07T02:16:04.377344158Z" level=info msg="StartContainer for \"6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e\"" Jul 7 02:16:04.378315 containerd[2803]: time="2025-07-07T02:16:04.378291940Z" level=info msg="connecting to shim 6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e" address="unix:///run/containerd/s/7875c4f441d73e56a79f9c201546f954349d0ef6ea4cac26706ead2c893d1a05" protocol=ttrpc version=3 Jul 7 02:16:04.406801 systemd[1]: Started cri-containerd-6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e.scope - libcontainer container 6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e. 
Jul 7 02:16:04.437044 containerd[2803]: time="2025-07-07T02:16:04.437016679Z" level=info msg="StartContainer for \"6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e\" returns successfully" Jul 7 02:16:04.631382 kubelet[4310]: I0707 02:16:04.631334 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-69bfc559f8-m494x" podStartSLOduration=1.406982837 podStartE2EDuration="2.63131892s" podCreationTimestamp="2025-07-07 02:16:02 +0000 UTC" firstStartedPulling="2025-07-07 02:16:03.142732014 +0000 UTC m=+29.657377149" lastFinishedPulling="2025-07-07 02:16:04.367068097 +0000 UTC m=+30.881713232" observedRunningTime="2025-07-07 02:16:04.630833989 +0000 UTC m=+31.145479124" watchObservedRunningTime="2025-07-07 02:16:04.63131892 +0000 UTC m=+31.145964055" Jul 7 02:16:06.455583 systemd[1]: Started sshd@7-147.28.151.230:22-180.76.151.217:42494.service - OpenSSH per-connection server daemon (180.76.151.217:42494). Jul 7 02:16:07.907906 sshd[6703]: Received disconnect from 180.76.151.217 port 42494:11: Bye Bye [preauth] Jul 7 02:16:07.907906 sshd[6703]: Disconnected from authenticating user root 180.76.151.217 port 42494 [preauth] Jul 7 02:16:07.909950 systemd[1]: sshd@7-147.28.151.230:22-180.76.151.217:42494.service: Deactivated successfully. 
Jul 7 02:16:08.229185 kubelet[4310]: I0707 02:16:08.229084 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:08.298217 containerd[2803]: time="2025-07-07T02:16:08.298160872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"3e6787212d42ffd58472ea25a82c7fcc29ede3a2c19752a7e9a77859e7bd59bd\" pid:6831 exit_status:1 exited_at:{seconds:1751854568 nanos:297821129}" Jul 7 02:16:08.366474 containerd[2803]: time="2025-07-07T02:16:08.366435288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"afb8bd00c3bb7639edebf8d8fa0dd4c6c41239fc99699674bf995ea26e8bca72\" pid:6866 exit_status:1 exited_at:{seconds:1751854568 nanos:366268536}" Jul 7 02:16:09.559500 containerd[2803]: time="2025-07-07T02:16:09.559455886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rltds,Uid:46223083-50d4-419a-b1e4-3cf8215abc31,Namespace:calico-system,Attempt:0,}" Jul 7 02:16:09.559916 containerd[2803]: time="2025-07-07T02:16:09.559570881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f479b6b87-wdp9r,Uid:2debacf4-a52f-4082-9706-631c367fb96a,Namespace:calico-system,Attempt:0,}" Jul 7 02:16:09.637865 systemd-networkd[2713]: cali757d8a36195: Link UP Jul 7 02:16:09.638619 systemd-networkd[2713]: cali757d8a36195: Gained carrier Jul 7 02:16:09.655753 containerd[2803]: 2025-07-07 02:16:09.576 [INFO][6951] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:09.655753 containerd[2803]: 2025-07-07 02:16:09.590 [INFO][6951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0 goldmane-768f4c5c69- calico-system 46223083-50d4-419a-b1e4-3cf8215abc31 798 0 2025-07-07 02:15:55 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b goldmane-768f4c5c69-rltds eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali757d8a36195 [] [] }} ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-" Jul 7 02:16:09.655753 containerd[2803]: 2025-07-07 02:16:09.590 [INFO][6951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" Jul 7 02:16:09.655753 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7000] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" HandleID="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Workload="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7000] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" HandleID="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Workload="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503650), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"goldmane-768f4c5c69-rltds", "timestamp":"2025-07-07 02:16:09.610255688 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.618 [INFO][7000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.621 [INFO][7000] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.624 [INFO][7000] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.626 [INFO][7000] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.655894 containerd[2803]: 2025-07-07 02:16:09.627 [INFO][7000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.656066 containerd[2803]: 2025-07-07 02:16:09.627 [INFO][7000] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.656066 containerd[2803]: 2025-07-07 02:16:09.628 [INFO][7000] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074 Jul 7 02:16:09.656066 containerd[2803]: 2025-07-07 02:16:09.631 [INFO][7000] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.656066 containerd[2803]: 2025-07-07 02:16:09.634 [INFO][7000] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.130/26] block=192.168.33.128/26 handle="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.656066 containerd[2803]: 2025-07-07 02:16:09.634 [INFO][7000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.130/26] handle="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.656066 containerd[2803]: 2025-07-07 02:16:09.634 [INFO][7000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:16:09.656066 containerd[2803]: 2025-07-07 02:16:09.634 [INFO][7000] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.130/26] IPv6=[] ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" HandleID="k8s-pod-network.b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Workload="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" Jul 7 02:16:09.656248 containerd[2803]: 2025-07-07 02:16:09.636 [INFO][6951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"46223083-50d4-419a-b1e4-3cf8215abc31", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"goldmane-768f4c5c69-rltds", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali757d8a36195", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:09.656248 containerd[2803]: 2025-07-07 02:16:09.636 [INFO][6951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.130/32] ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" Jul 7 02:16:09.656314 containerd[2803]: 2025-07-07 02:16:09.636 [INFO][6951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali757d8a36195 ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" Jul 7 02:16:09.656314 containerd[2803]: 2025-07-07 02:16:09.638 [INFO][6951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" Jul 7 02:16:09.656356 containerd[2803]: 2025-07-07 02:16:09.639 [INFO][6951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", 
UID:"46223083-50d4-419a-b1e4-3cf8215abc31", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074", Pod:"goldmane-768f4c5c69-rltds", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali757d8a36195", MAC:"36:59:10:27:92:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:09.656401 containerd[2803]: 2025-07-07 02:16:09.654 [INFO][6951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" Namespace="calico-system" Pod="goldmane-768f4c5c69-rltds" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-goldmane--768f4c5c69--rltds-eth0" Jul 7 02:16:09.665205 containerd[2803]: time="2025-07-07T02:16:09.665175492Z" level=info msg="connecting to shim b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074" address="unix:///run/containerd/s/ff9a4b5e52998ad169e810f63c9a0b0ed972f40af68764283464811247902690" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:09.697878 systemd[1]: Started 
cri-containerd-b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074.scope - libcontainer container b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074. Jul 7 02:16:09.733033 containerd[2803]: time="2025-07-07T02:16:09.732990517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rltds,Uid:46223083-50d4-419a-b1e4-3cf8215abc31,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074\"" Jul 7 02:16:09.734374 containerd[2803]: time="2025-07-07T02:16:09.734353411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 02:16:09.740929 systemd-networkd[2713]: caliac69341035e: Link UP Jul 7 02:16:09.741203 systemd-networkd[2713]: caliac69341035e: Gained carrier Jul 7 02:16:09.748648 containerd[2803]: 2025-07-07 02:16:09.577 [INFO][6953] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:09.748648 containerd[2803]: 2025-07-07 02:16:09.591 [INFO][6953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0 calico-kube-controllers-f479b6b87- calico-system 2debacf4-a52f-4082-9706-631c367fb96a 794 0 2025-07-07 02:15:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f479b6b87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b calico-kube-controllers-f479b6b87-wdp9r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliac69341035e [] [] }} ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" 
WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-" Jul 7 02:16:09.748648 containerd[2803]: 2025-07-07 02:16:09.591 [INFO][6953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" Jul 7 02:16:09.748648 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7002] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" HandleID="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7002] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" HandleID="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b6d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"calico-kube-controllers-f479b6b87-wdp9r", "timestamp":"2025-07-07 02:16:09.610373402 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.610 [INFO][7002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.634 [INFO][7002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.635 [INFO][7002] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.719 [INFO][7002] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.724 [INFO][7002] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.727 [INFO][7002] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.728 [INFO][7002] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.748901 containerd[2803]: 2025-07-07 02:16:09.729 [INFO][7002] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.749089 containerd[2803]: 2025-07-07 02:16:09.729 [INFO][7002] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.749089 containerd[2803]: 2025-07-07 02:16:09.730 [INFO][7002] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc Jul 7 02:16:09.749089 containerd[2803]: 2025-07-07 02:16:09.733 [INFO][7002] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" 
host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.749089 containerd[2803]: 2025-07-07 02:16:09.736 [INFO][7002] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.131/26] block=192.168.33.128/26 handle="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.749089 containerd[2803]: 2025-07-07 02:16:09.736 [INFO][7002] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.131/26] handle="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:09.749089 containerd[2803]: 2025-07-07 02:16:09.737 [INFO][7002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:09.749089 containerd[2803]: 2025-07-07 02:16:09.737 [INFO][7002] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.131/26] IPv6=[] ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" HandleID="k8s-pod-network.79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" Jul 7 02:16:09.749218 containerd[2803]: 2025-07-07 02:16:09.738 [INFO][6953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0", GenerateName:"calico-kube-controllers-f479b6b87-", Namespace:"calico-system", SelfLink:"", UID:"2debacf4-a52f-4082-9706-631c367fb96a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 55, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f479b6b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"calico-kube-controllers-f479b6b87-wdp9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac69341035e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:09.749262 containerd[2803]: 2025-07-07 02:16:09.738 [INFO][6953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.131/32] ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" Jul 7 02:16:09.749262 containerd[2803]: 2025-07-07 02:16:09.738 [INFO][6953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac69341035e ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" Jul 7 02:16:09.749262 containerd[2803]: 2025-07-07 02:16:09.741 [INFO][6953] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" Jul 7 02:16:09.749321 containerd[2803]: 2025-07-07 02:16:09.741 [INFO][6953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0", GenerateName:"calico-kube-controllers-f479b6b87-", Namespace:"calico-system", SelfLink:"", UID:"2debacf4-a52f-4082-9706-631c367fb96a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f479b6b87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc", Pod:"calico-kube-controllers-f479b6b87-wdp9r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac69341035e", MAC:"b2:ef:3e:a8:18:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:09.749364 containerd[2803]: 2025-07-07 02:16:09.746 [INFO][6953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" Namespace="calico-system" Pod="calico-kube-controllers-f479b6b87-wdp9r" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--kube--controllers--f479b6b87--wdp9r-eth0" Jul 7 02:16:09.758586 containerd[2803]: time="2025-07-07T02:16:09.758552250Z" level=info msg="connecting to shim 79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc" address="unix:///run/containerd/s/9ab49837643f174f61926788a4d38a42778ec0dfe5a942cb8797be60f54b2b10" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:09.784852 systemd[1]: Started cri-containerd-79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc.scope - libcontainer container 79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc. 
Jul 7 02:16:09.810762 containerd[2803]: time="2025-07-07T02:16:09.810699707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f479b6b87-wdp9r,Uid:2debacf4-a52f-4082-9706-631c367fb96a,Namespace:calico-system,Attempt:0,} returns sandbox id \"79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc\"" Jul 7 02:16:10.637460 containerd[2803]: time="2025-07-07T02:16:10.637402710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:10.637755 containerd[2803]: time="2025-07-07T02:16:10.637427708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 02:16:10.638250 containerd[2803]: time="2025-07-07T02:16:10.638197793Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:10.639985 containerd[2803]: time="2025-07-07T02:16:10.639955752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:10.640719 containerd[2803]: time="2025-07-07T02:16:10.640690038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 906.302749ms" Jul 7 02:16:10.640802 containerd[2803]: time="2025-07-07T02:16:10.640788474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" 
Jul 7 02:16:10.641465 containerd[2803]: time="2025-07-07T02:16:10.641447804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 02:16:10.642345 containerd[2803]: time="2025-07-07T02:16:10.642300364Z" level=info msg="CreateContainer within sandbox \"b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 02:16:10.648740 containerd[2803]: time="2025-07-07T02:16:10.648689871Z" level=info msg="Container f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:10.653183 containerd[2803]: time="2025-07-07T02:16:10.653106387Z" level=info msg="CreateContainer within sandbox \"b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\"" Jul 7 02:16:10.653473 containerd[2803]: time="2025-07-07T02:16:10.653454451Z" level=info msg="StartContainer for \"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\"" Jul 7 02:16:10.654490 containerd[2803]: time="2025-07-07T02:16:10.654465445Z" level=info msg="connecting to shim f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e" address="unix:///run/containerd/s/ff9a4b5e52998ad169e810f63c9a0b0ed972f40af68764283464811247902690" protocol=ttrpc version=3 Jul 7 02:16:10.682803 systemd[1]: Started cri-containerd-f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e.scope - libcontainer container f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e. 
Jul 7 02:16:10.711790 containerd[2803]: time="2025-07-07T02:16:10.711760491Z" level=info msg="StartContainer for \"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" returns successfully" Jul 7 02:16:11.253800 systemd-networkd[2713]: cali757d8a36195: Gained IPv6LL Jul 7 02:16:11.445771 systemd-networkd[2713]: caliac69341035e: Gained IPv6LL Jul 7 02:16:11.519494 containerd[2803]: time="2025-07-07T02:16:11.519423216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:11.519565 containerd[2803]: time="2025-07-07T02:16:11.519443135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 02:16:11.520022 containerd[2803]: time="2025-07-07T02:16:11.520001830Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:11.521556 containerd[2803]: time="2025-07-07T02:16:11.521529323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:11.522147 containerd[2803]: time="2025-07-07T02:16:11.522125617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 880.598417ms" Jul 7 02:16:11.522173 containerd[2803]: time="2025-07-07T02:16:11.522151095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference 
\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 02:16:11.527355 containerd[2803]: time="2025-07-07T02:16:11.527326827Z" level=info msg="CreateContainer within sandbox \"79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 02:16:11.530830 containerd[2803]: time="2025-07-07T02:16:11.530807074Z" level=info msg="Container 029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:11.534297 containerd[2803]: time="2025-07-07T02:16:11.534270201Z" level=info msg="CreateContainer within sandbox \"79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\"" Jul 7 02:16:11.534615 containerd[2803]: time="2025-07-07T02:16:11.534591827Z" level=info msg="StartContainer for \"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\"" Jul 7 02:16:11.535564 containerd[2803]: time="2025-07-07T02:16:11.535538865Z" level=info msg="connecting to shim 029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a" address="unix:///run/containerd/s/9ab49837643f174f61926788a4d38a42778ec0dfe5a942cb8797be60f54b2b10" protocol=ttrpc version=3 Jul 7 02:16:11.557292 containerd[2803]: time="2025-07-07T02:16:11.557264787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-4r7kc,Uid:19e3fd3f-6889-408b-8d4f-0c6f70c06d65,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:16:11.557402 containerd[2803]: time="2025-07-07T02:16:11.557374223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-527gm,Uid:8402c195-fbc9-4582-be73-2715d432c85d,Namespace:kube-system,Attempt:0,}" Jul 7 02:16:11.564806 systemd[1]: Started 
cri-containerd-029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a.scope - libcontainer container 029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a. Jul 7 02:16:11.594030 containerd[2803]: time="2025-07-07T02:16:11.593994768Z" level=info msg="StartContainer for \"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" returns successfully" Jul 7 02:16:11.637394 systemd-networkd[2713]: cali833de639dae: Link UP Jul 7 02:16:11.637620 systemd-networkd[2713]: cali833de639dae: Gained carrier Jul 7 02:16:11.644853 containerd[2803]: 2025-07-07 02:16:11.578 [INFO][7353] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:11.644853 containerd[2803]: 2025-07-07 02:16:11.588 [INFO][7353] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0 calico-apiserver-674744f469- calico-apiserver 19e3fd3f-6889-408b-8d4f-0c6f70c06d65 797 0 2025-07-07 02:15:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674744f469 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b calico-apiserver-674744f469-4r7kc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali833de639dae [] [] }} ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-4r7kc" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-" Jul 7 02:16:11.644853 containerd[2803]: 2025-07-07 02:16:11.588 [INFO][7353] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" 
Pod="calico-apiserver-674744f469-4r7kc" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:11.644853 containerd[2803]: 2025-07-07 02:16:11.609 [INFO][7415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.609 [INFO][7415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000363690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"calico-apiserver-674744f469-4r7kc", "timestamp":"2025-07-07 02:16:11.609583801 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.609 [INFO][7415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.609 [INFO][7415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.609 [INFO][7415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.617 [INFO][7415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.620 [INFO][7415] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.623 [INFO][7415] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.624 [INFO][7415] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645226 containerd[2803]: 2025-07-07 02:16:11.626 [INFO][7415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645410 containerd[2803]: 2025-07-07 02:16:11.626 [INFO][7415] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645410 containerd[2803]: 2025-07-07 02:16:11.627 [INFO][7415] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 Jul 7 02:16:11.645410 containerd[2803]: 2025-07-07 02:16:11.629 [INFO][7415] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645410 containerd[2803]: 2025-07-07 02:16:11.633 [INFO][7415] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.33.132/26] block=192.168.33.128/26 handle="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645410 containerd[2803]: 2025-07-07 02:16:11.633 [INFO][7415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.132/26] handle="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.645410 containerd[2803]: 2025-07-07 02:16:11.633 [INFO][7415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:11.645410 containerd[2803]: 2025-07-07 02:16:11.633 [INFO][7415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.132/26] IPv6=[] ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:11.645532 containerd[2803]: 2025-07-07 02:16:11.634 [INFO][7353] cni-plugin/k8s.go 418: Populated endpoint ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-4r7kc" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0", GenerateName:"calico-apiserver-674744f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"19e3fd3f-6889-408b-8d4f-0c6f70c06d65", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"674744f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"calico-apiserver-674744f469-4r7kc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali833de639dae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:11.645579 containerd[2803]: 2025-07-07 02:16:11.635 [INFO][7353] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.132/32] ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-4r7kc" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:11.645579 containerd[2803]: 2025-07-07 02:16:11.635 [INFO][7353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali833de639dae ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-4r7kc" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:11.645579 containerd[2803]: 2025-07-07 02:16:11.637 [INFO][7353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-4r7kc" 
WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:11.645637 containerd[2803]: 2025-07-07 02:16:11.637 [INFO][7353] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-4r7kc" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0", GenerateName:"calico-apiserver-674744f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"19e3fd3f-6889-408b-8d4f-0c6f70c06d65", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674744f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83", Pod:"calico-apiserver-674744f469-4r7kc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali833de639dae", MAC:"fe:dc:bf:2f:6c:d0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:11.645698 containerd[2803]: 2025-07-07 02:16:11.643 [INFO][7353] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-4r7kc" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:11.646585 kubelet[4310]: I0707 02:16:11.646533 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f479b6b87-wdp9r" podStartSLOduration=14.935422405 podStartE2EDuration="16.646516093s" podCreationTimestamp="2025-07-07 02:15:55 +0000 UTC" firstStartedPulling="2025-07-07 02:16:09.811587264 +0000 UTC m=+36.326232359" lastFinishedPulling="2025-07-07 02:16:11.522680912 +0000 UTC m=+38.037326047" observedRunningTime="2025-07-07 02:16:11.646232865 +0000 UTC m=+38.160878000" watchObservedRunningTime="2025-07-07 02:16:11.646516093 +0000 UTC m=+38.161161268" Jul 7 02:16:11.653688 kubelet[4310]: I0707 02:16:11.653648 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-rltds" podStartSLOduration=15.746232162 podStartE2EDuration="16.653635779s" podCreationTimestamp="2025-07-07 02:15:55 +0000 UTC" firstStartedPulling="2025-07-07 02:16:09.733924752 +0000 UTC m=+36.248569847" lastFinishedPulling="2025-07-07 02:16:10.641328329 +0000 UTC m=+37.155973464" observedRunningTime="2025-07-07 02:16:11.65338179 +0000 UTC m=+38.168026965" watchObservedRunningTime="2025-07-07 02:16:11.653635779 +0000 UTC m=+38.168280914" Jul 7 02:16:11.654866 containerd[2803]: time="2025-07-07T02:16:11.654833126Z" level=info msg="connecting to shim 049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" 
address="unix:///run/containerd/s/8b94ba6ac066d1fe6c4345d2b68a8009f022c836ed248b3cba6f855f1497daf9" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:11.668112 systemd[1]: Started cri-containerd-049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83.scope - libcontainer container 049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83. Jul 7 02:16:11.694305 containerd[2803]: time="2025-07-07T02:16:11.694272667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-4r7kc,Uid:19e3fd3f-6889-408b-8d4f-0c6f70c06d65,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\"" Jul 7 02:16:11.695403 containerd[2803]: time="2025-07-07T02:16:11.695384098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 02:16:11.716116 containerd[2803]: time="2025-07-07T02:16:11.716092585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"d4a4745f7996f8b08760769151aaebf481e6c013c581611eab5656451a0450c6\" pid:7497 exit_status:1 exited_at:{seconds:1751854571 nanos:715808798}" Jul 7 02:16:11.738755 systemd-networkd[2713]: calibd5195d3588: Link UP Jul 7 02:16:11.738964 systemd-networkd[2713]: calibd5195d3588: Gained carrier Jul 7 02:16:11.746247 containerd[2803]: 2025-07-07 02:16:11.577 [INFO][7351] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:11.746247 containerd[2803]: 2025-07-07 02:16:11.588 [INFO][7351] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0 coredns-668d6bf9bc- kube-system 8402c195-fbc9-4582-be73-2715d432c85d 795 0 2025-07-07 02:15:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b coredns-668d6bf9bc-527gm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd5195d3588 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-" Jul 7 02:16:11.746247 containerd[2803]: 2025-07-07 02:16:11.588 [INFO][7351] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" Jul 7 02:16:11.746247 containerd[2803]: 2025-07-07 02:16:11.608 [INFO][7417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" HandleID="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Workload="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.609 [INFO][7417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" HandleID="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Workload="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40007be290), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"coredns-668d6bf9bc-527gm", "timestamp":"2025-07-07 02:16:11.608967108 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.609 [INFO][7417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.633 [INFO][7417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.633 [INFO][7417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.718 [INFO][7417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.722 [INFO][7417] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.725 [INFO][7417] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.726 [INFO][7417] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746410 containerd[2803]: 2025-07-07 02:16:11.728 [INFO][7417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746588 containerd[2803]: 2025-07-07 02:16:11.728 [INFO][7417] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746588 containerd[2803]: 2025-07-07 02:16:11.729 [INFO][7417] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3 Jul 7 02:16:11.746588 containerd[2803]: 
2025-07-07 02:16:11.731 [INFO][7417] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746588 containerd[2803]: 2025-07-07 02:16:11.735 [INFO][7417] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.133/26] block=192.168.33.128/26 handle="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746588 containerd[2803]: 2025-07-07 02:16:11.735 [INFO][7417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.133/26] handle="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:11.746588 containerd[2803]: 2025-07-07 02:16:11.735 [INFO][7417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:11.746588 containerd[2803]: 2025-07-07 02:16:11.735 [INFO][7417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.133/26] IPv6=[] ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" HandleID="k8s-pod-network.33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Workload="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" Jul 7 02:16:11.746736 containerd[2803]: 2025-07-07 02:16:11.737 [INFO][7351] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", 
UID:"8402c195-fbc9-4582-be73-2715d432c85d", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"coredns-668d6bf9bc-527gm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd5195d3588", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:11.746736 containerd[2803]: 2025-07-07 02:16:11.737 [INFO][7351] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.133/32] ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" Jul 7 02:16:11.746736 containerd[2803]: 2025-07-07 02:16:11.737 [INFO][7351] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to calibd5195d3588 ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" Jul 7 02:16:11.746736 containerd[2803]: 2025-07-07 02:16:11.739 [INFO][7351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" Jul 7 02:16:11.746736 containerd[2803]: 2025-07-07 02:16:11.739 [INFO][7351] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8402c195-fbc9-4582-be73-2715d432c85d", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", 
ContainerID:"33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3", Pod:"coredns-668d6bf9bc-527gm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd5195d3588", MAC:"d6:b5:fd:34:46:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:11.746736 containerd[2803]: 2025-07-07 02:16:11.744 [INFO][7351] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-527gm" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--527gm-eth0" Jul 7 02:16:11.756628 containerd[2803]: time="2025-07-07T02:16:11.756600440Z" level=info msg="connecting to shim 33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3" address="unix:///run/containerd/s/53b145d942903218db328c450f39d5a150bdd544085d96213cd8576a808f6830" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:11.785872 systemd[1]: Started cri-containerd-33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3.scope - libcontainer container 33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3. 
Jul 7 02:16:11.819806 containerd[2803]: time="2025-07-07T02:16:11.819770015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-527gm,Uid:8402c195-fbc9-4582-be73-2715d432c85d,Namespace:kube-system,Attempt:0,} returns sandbox id \"33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3\"" Jul 7 02:16:11.821678 containerd[2803]: time="2025-07-07T02:16:11.821652412Z" level=info msg="CreateContainer within sandbox \"33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 02:16:11.825963 containerd[2803]: time="2025-07-07T02:16:11.825934383Z" level=info msg="Container 116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:11.828440 containerd[2803]: time="2025-07-07T02:16:11.828412674Z" level=info msg="CreateContainer within sandbox \"33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86\"" Jul 7 02:16:11.828771 containerd[2803]: time="2025-07-07T02:16:11.828742659Z" level=info msg="StartContainer for \"116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86\"" Jul 7 02:16:11.829493 containerd[2803]: time="2025-07-07T02:16:11.829473547Z" level=info msg="connecting to shim 116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86" address="unix:///run/containerd/s/53b145d942903218db328c450f39d5a150bdd544085d96213cd8576a808f6830" protocol=ttrpc version=3 Jul 7 02:16:11.856860 systemd[1]: Started cri-containerd-116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86.scope - libcontainer container 116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86. 
Jul 7 02:16:11.877512 containerd[2803]: time="2025-07-07T02:16:11.877479351Z" level=info msg="StartContainer for \"116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86\" returns successfully" Jul 7 02:16:12.517324 containerd[2803]: time="2025-07-07T02:16:12.517284621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:12.517445 containerd[2803]: time="2025-07-07T02:16:12.517334139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 02:16:12.518033 containerd[2803]: time="2025-07-07T02:16:12.518017590Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:12.519573 containerd[2803]: time="2025-07-07T02:16:12.519552285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:12.520213 containerd[2803]: time="2025-07-07T02:16:12.520198537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 824.788ms" Jul 7 02:16:12.520281 containerd[2803]: time="2025-07-07T02:16:12.520220136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 02:16:12.521742 containerd[2803]: time="2025-07-07T02:16:12.521721913Z" level=info msg="CreateContainer within sandbox 
\"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:16:12.527476 containerd[2803]: time="2025-07-07T02:16:12.527444391Z" level=info msg="Container 1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:12.530692 containerd[2803]: time="2025-07-07T02:16:12.530662015Z" level=info msg="CreateContainer within sandbox \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\"" Jul 7 02:16:12.531035 containerd[2803]: time="2025-07-07T02:16:12.531015400Z" level=info msg="StartContainer for \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\"" Jul 7 02:16:12.531921 containerd[2803]: time="2025-07-07T02:16:12.531900562Z" level=info msg="connecting to shim 1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e" address="unix:///run/containerd/s/8b94ba6ac066d1fe6c4345d2b68a8009f022c836ed248b3cba6f855f1497daf9" protocol=ttrpc version=3 Jul 7 02:16:12.557811 containerd[2803]: time="2025-07-07T02:16:12.557776628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4f5fbcf-wqtv7,Uid:a28c1297-214b-4a2e-af80-83a741733d7e,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:16:12.557897 containerd[2803]: time="2025-07-07T02:16:12.557869344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4gts,Uid:620039ac-83fb-48da-b837-6597b522186c,Namespace:calico-system,Attempt:0,}" Jul 7 02:16:12.563816 systemd[1]: Started cri-containerd-1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e.scope - libcontainer container 1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e. 
Jul 7 02:16:12.591749 containerd[2803]: time="2025-07-07T02:16:12.591715792Z" level=info msg="StartContainer for \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" returns successfully" Jul 7 02:16:12.636232 systemd-networkd[2713]: cali1c21cc7a98e: Link UP Jul 7 02:16:12.636431 systemd-networkd[2713]: cali1c21cc7a98e: Gained carrier Jul 7 02:16:12.640167 kubelet[4310]: I0707 02:16:12.640150 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.575 [INFO][7762] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.586 [INFO][7762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0 calico-apiserver-fc4f5fbcf- calico-apiserver a28c1297-214b-4a2e-af80-83a741733d7e 790 0 2025-07-07 02:15:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc4f5fbcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b calico-apiserver-fc4f5fbcf-wqtv7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1c21cc7a98e [] [] }} ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.586 [INFO][7762] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" 
WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" HandleID="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" HandleID="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000482780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"calico-apiserver-fc4f5fbcf-wqtv7", "timestamp":"2025-07-07 02:16:12.60711238 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.615 [INFO][7832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.618 [INFO][7832] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.621 [INFO][7832] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.622 [INFO][7832] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.623 [INFO][7832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.623 [INFO][7832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.625 [INFO][7832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.628 [INFO][7832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.632 [INFO][7832] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.33.134/26] block=192.168.33.128/26 handle="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.632 [INFO][7832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.134/26] handle="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.632 [INFO][7832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:12.644248 containerd[2803]: 2025-07-07 02:16:12.632 [INFO][7832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.134/26] IPv6=[] ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" HandleID="k8s-pod-network.90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" Jul 7 02:16:12.644722 containerd[2803]: 2025-07-07 02:16:12.634 [INFO][7762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0", GenerateName:"calico-apiserver-fc4f5fbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"a28c1297-214b-4a2e-af80-83a741733d7e", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"fc4f5fbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"calico-apiserver-fc4f5fbcf-wqtv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c21cc7a98e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:12.644722 containerd[2803]: 2025-07-07 02:16:12.634 [INFO][7762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.134/32] ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" Jul 7 02:16:12.644722 containerd[2803]: 2025-07-07 02:16:12.634 [INFO][7762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c21cc7a98e ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" Jul 7 02:16:12.644722 containerd[2803]: 2025-07-07 02:16:12.636 [INFO][7762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" 
WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" Jul 7 02:16:12.644722 containerd[2803]: 2025-07-07 02:16:12.636 [INFO][7762] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0", GenerateName:"calico-apiserver-fc4f5fbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"a28c1297-214b-4a2e-af80-83a741733d7e", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4f5fbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b", Pod:"calico-apiserver-fc4f5fbcf-wqtv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c21cc7a98e", MAC:"92:e9:1f:d0:9c:c5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:12.644722 containerd[2803]: 2025-07-07 02:16:12.643 [INFO][7762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-wqtv7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--wqtv7-eth0" Jul 7 02:16:12.646388 kubelet[4310]: I0707 02:16:12.646345 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-527gm" podStartSLOduration=31.646325921 podStartE2EDuration="31.646325921s" podCreationTimestamp="2025-07-07 02:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:16:12.645954297 +0000 UTC m=+39.160599432" watchObservedRunningTime="2025-07-07 02:16:12.646325921 +0000 UTC m=+39.160971056" Jul 7 02:16:12.652908 kubelet[4310]: I0707 02:16:12.652867 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-674744f469-4r7kc" podStartSLOduration=23.827305797 podStartE2EDuration="24.652853605s" podCreationTimestamp="2025-07-07 02:15:48 +0000 UTC" firstStartedPulling="2025-07-07 02:16:11.695207946 +0000 UTC m=+38.209853081" lastFinishedPulling="2025-07-07 02:16:12.520755754 +0000 UTC m=+39.035400889" observedRunningTime="2025-07-07 02:16:12.65273745 +0000 UTC m=+39.167382585" watchObservedRunningTime="2025-07-07 02:16:12.652853605 +0000 UTC m=+39.167498740" Jul 7 02:16:12.654848 containerd[2803]: time="2025-07-07T02:16:12.654783363Z" level=info msg="connecting to shim 90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b" address="unix:///run/containerd/s/16a97922112f58709cb72dcbb9d51ef1b45e0a937e3eb6b3f905891f2509b575" namespace=k8s.io protocol=ttrpc 
version=3 Jul 7 02:16:12.661762 systemd-networkd[2713]: cali833de639dae: Gained IPv6LL Jul 7 02:16:12.675954 systemd[1]: Started cri-containerd-90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b.scope - libcontainer container 90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b. Jul 7 02:16:12.702504 containerd[2803]: time="2025-07-07T02:16:12.702468266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4f5fbcf-wqtv7,Uid:a28c1297-214b-4a2e-af80-83a741733d7e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b\"" Jul 7 02:16:12.704319 containerd[2803]: time="2025-07-07T02:16:12.704295788Z" level=info msg="CreateContainer within sandbox \"90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:16:12.707742 containerd[2803]: time="2025-07-07T02:16:12.707716724Z" level=info msg="Container 42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:12.710731 containerd[2803]: time="2025-07-07T02:16:12.710707557Z" level=info msg="CreateContainer within sandbox \"90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a\"" Jul 7 02:16:12.711054 containerd[2803]: time="2025-07-07T02:16:12.711033823Z" level=info msg="StartContainer for \"42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a\"" Jul 7 02:16:12.711958 containerd[2803]: time="2025-07-07T02:16:12.711933865Z" level=info msg="connecting to shim 42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a" address="unix:///run/containerd/s/16a97922112f58709cb72dcbb9d51ef1b45e0a937e3eb6b3f905891f2509b575" protocol=ttrpc version=3 Jul 7 02:16:12.714083 containerd[2803]: 
time="2025-07-07T02:16:12.714062815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"f38d31d0bd23c9a3291bb9923a563bdf55bbc6b2e73d78328f156c823be6f2d2\" pid:7922 exit_status:1 exited_at:{seconds:1751854572 nanos:713832025}" Jul 7 02:16:12.735481 systemd-networkd[2713]: cali3b3cee75e7a: Link UP Jul 7 02:16:12.735698 systemd-networkd[2713]: cali3b3cee75e7a: Gained carrier Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.575 [INFO][7765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.586 [INFO][7765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0 csi-node-driver- calico-system 620039ac-83fb-48da-b837-6597b522186c 681 0 2025-07-07 02:15:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b csi-node-driver-v4gts eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b3cee75e7a [] [] }} ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.586 [INFO][7765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" Jul 7 02:16:12.742935 
containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" HandleID="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Workload="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" HandleID="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Workload="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"csi-node-driver-v4gts", "timestamp":"2025-07-07 02:16:12.607441606 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.607 [INFO][7831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.632 [INFO][7831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.632 [INFO][7831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.715 [INFO][7831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.718 [INFO][7831] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.721 [INFO][7831] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.723 [INFO][7831] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.724 [INFO][7831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.724 [INFO][7831] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.725 [INFO][7831] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316 Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.728 [INFO][7831] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.732 [INFO][7831] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.33.135/26] block=192.168.33.128/26 handle="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.732 [INFO][7831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.135/26] handle="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.732 [INFO][7831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:12.742935 containerd[2803]: 2025-07-07 02:16:12.732 [INFO][7831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.135/26] IPv6=[] ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" HandleID="k8s-pod-network.a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Workload="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" Jul 7 02:16:12.743346 containerd[2803]: 2025-07-07 02:16:12.734 [INFO][7765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"620039ac-83fb-48da-b837-6597b522186c", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"csi-node-driver-v4gts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b3cee75e7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:12.743346 containerd[2803]: 2025-07-07 02:16:12.734 [INFO][7765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.135/32] ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" Jul 7 02:16:12.743346 containerd[2803]: 2025-07-07 02:16:12.734 [INFO][7765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b3cee75e7a ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" Jul 7 02:16:12.743346 containerd[2803]: 2025-07-07 02:16:12.735 [INFO][7765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" Jul 7 02:16:12.743346 containerd[2803]: 2025-07-07 02:16:12.735 
[INFO][7765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"620039ac-83fb-48da-b837-6597b522186c", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316", Pod:"csi-node-driver-v4gts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b3cee75e7a", MAC:"f6:f6:df:1d:ad:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:12.743346 containerd[2803]: 2025-07-07 02:16:12.741 [INFO][7765] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" Namespace="calico-system" Pod="csi-node-driver-v4gts" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-csi--node--driver--v4gts-eth0" Jul 7 02:16:12.743806 systemd[1]: Started cri-containerd-42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a.scope - libcontainer container 42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a. Jul 7 02:16:12.752000 containerd[2803]: time="2025-07-07T02:16:12.751971131Z" level=info msg="connecting to shim a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316" address="unix:///run/containerd/s/2cc950e877f4803645be6a51d218c358eb249c502a22ea5386cbc7893ddcd39d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:12.789889 systemd[1]: Started cri-containerd-a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316.scope - libcontainer container a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316. 
Jul 7 02:16:12.792225 containerd[2803]: time="2025-07-07T02:16:12.792195469Z" level=info msg="StartContainer for \"42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a\" returns successfully" Jul 7 02:16:12.808455 containerd[2803]: time="2025-07-07T02:16:12.808428223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4gts,Uid:620039ac-83fb-48da-b837-6597b522186c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316\"" Jul 7 02:16:12.809519 containerd[2803]: time="2025-07-07T02:16:12.809492898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 02:16:13.044780 systemd-networkd[2713]: calibd5195d3588: Gained IPv6LL Jul 7 02:16:13.177209 containerd[2803]: time="2025-07-07T02:16:13.177175075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:13.177262 containerd[2803]: time="2025-07-07T02:16:13.177221433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 02:16:13.177863 containerd[2803]: time="2025-07-07T02:16:13.177839528Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:13.179313 containerd[2803]: time="2025-07-07T02:16:13.179289669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:13.179928 containerd[2803]: time="2025-07-07T02:16:13.179897085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 370.376468ms" Jul 7 02:16:13.179953 containerd[2803]: time="2025-07-07T02:16:13.179926203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 02:16:13.181541 containerd[2803]: time="2025-07-07T02:16:13.181517139Z" level=info msg="CreateContainer within sandbox \"a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 02:16:13.189163 containerd[2803]: time="2025-07-07T02:16:13.189128509Z" level=info msg="Container 3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:13.193013 containerd[2803]: time="2025-07-07T02:16:13.192982153Z" level=info msg="CreateContainer within sandbox \"a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219\"" Jul 7 02:16:13.193382 containerd[2803]: time="2025-07-07T02:16:13.193356098Z" level=info msg="StartContainer for \"3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219\"" Jul 7 02:16:13.194650 containerd[2803]: time="2025-07-07T02:16:13.194626806Z" level=info msg="connecting to shim 3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219" address="unix:///run/containerd/s/2cc950e877f4803645be6a51d218c358eb249c502a22ea5386cbc7893ddcd39d" protocol=ttrpc version=3 Jul 7 02:16:13.224807 systemd[1]: Started cri-containerd-3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219.scope - libcontainer container 3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219. 
Jul 7 02:16:13.252070 containerd[2803]: time="2025-07-07T02:16:13.252042192Z" level=info msg="StartContainer for \"3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219\" returns successfully" Jul 7 02:16:13.252879 containerd[2803]: time="2025-07-07T02:16:13.252860519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 02:16:13.558376 containerd[2803]: time="2025-07-07T02:16:13.558340823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7hwxq,Uid:67f18952-866b-40bb-a879-3ff8e070364a,Namespace:kube-system,Attempt:0,}" Jul 7 02:16:13.637325 systemd-networkd[2713]: calia965566ac59: Link UP Jul 7 02:16:13.637573 systemd-networkd[2713]: calia965566ac59: Gained carrier Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.579 [INFO][8197] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.589 [INFO][8197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0 coredns-668d6bf9bc- kube-system 67f18952-866b-40bb-a879-3ff8e070364a 796 0 2025-07-07 02:15:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b coredns-668d6bf9bc-7hwxq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia965566ac59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.589 [INFO][8197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.610 [INFO][8223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" HandleID="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.610 [INFO][8223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" HandleID="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"coredns-668d6bf9bc-7hwxq", "timestamp":"2025-07-07 02:16:13.610134478 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.610 [INFO][8223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.610 [INFO][8223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.610 [INFO][8223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.617 [INFO][8223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.620 [INFO][8223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.623 [INFO][8223] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.624 [INFO][8223] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.626 [INFO][8223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.626 [INFO][8223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.628 [INFO][8223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.630 [INFO][8223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.634 [INFO][8223] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.33.136/26] block=192.168.33.128/26 handle="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.634 [INFO][8223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.136/26] handle="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.634 [INFO][8223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:13.645642 containerd[2803]: 2025-07-07 02:16:13.634 [INFO][8223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.136/26] IPv6=[] ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" HandleID="k8s-pod-network.52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" Jul 7 02:16:13.646315 containerd[2803]: 2025-07-07 02:16:13.636 [INFO][8197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"67f18952-866b-40bb-a879-3ff8e070364a", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"coredns-668d6bf9bc-7hwxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia965566ac59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:13.646315 containerd[2803]: 2025-07-07 02:16:13.636 [INFO][8197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.136/32] ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" Jul 7 02:16:13.646315 containerd[2803]: 2025-07-07 02:16:13.636 [INFO][8197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia965566ac59 ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" Jul 7 02:16:13.646315 containerd[2803]: 2025-07-07 02:16:13.637 [INFO][8197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" Jul 7 02:16:13.646315 containerd[2803]: 2025-07-07 02:16:13.637 [INFO][8197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"67f18952-866b-40bb-a879-3ff8e070364a", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f", Pod:"coredns-668d6bf9bc-7hwxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia965566ac59", MAC:"e2:34:04:56:f5:f9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:13.646315 containerd[2803]: 2025-07-07 02:16:13.643 [INFO][8197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" Namespace="kube-system" Pod="coredns-668d6bf9bc-7hwxq" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-coredns--668d6bf9bc--7hwxq-eth0" Jul 7 02:16:13.651877 kubelet[4310]: I0707 02:16:13.651827 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc4f5fbcf-wqtv7" podStartSLOduration=24.651808424 podStartE2EDuration="24.651808424s" podCreationTimestamp="2025-07-07 02:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:16:13.651305245 +0000 UTC m=+40.165950380" watchObservedRunningTime="2025-07-07 02:16:13.651808424 +0000 UTC m=+40.166453559" Jul 7 02:16:13.655403 containerd[2803]: time="2025-07-07T02:16:13.655367160Z" level=info msg="connecting to shim 52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f" address="unix:///run/containerd/s/353d898b525616d48dff962259ad05aface1e63c6b25f4f07db7e3f4d36803b6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:13.677814 systemd[1]: Started cri-containerd-52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f.scope - libcontainer container 
52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f. Jul 7 02:16:13.705697 containerd[2803]: time="2025-07-07T02:16:13.705659156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7hwxq,Uid:67f18952-866b-40bb-a879-3ff8e070364a,Namespace:kube-system,Attempt:0,} returns sandbox id \"52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f\"" Jul 7 02:16:13.707565 containerd[2803]: time="2025-07-07T02:16:13.707535599Z" level=info msg="CreateContainer within sandbox \"52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 02:16:13.714314 containerd[2803]: time="2025-07-07T02:16:13.714284085Z" level=info msg="Container 2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:13.714706 containerd[2803]: time="2025-07-07T02:16:13.714686629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:13.714769 containerd[2803]: time="2025-07-07T02:16:13.714697228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 02:16:13.715380 containerd[2803]: time="2025-07-07T02:16:13.715360521Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:13.716861 containerd[2803]: time="2025-07-07T02:16:13.716839901Z" level=info msg="CreateContainer within sandbox \"52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4\"" Jul 7 02:16:13.716890 containerd[2803]: time="2025-07-07T02:16:13.716858700Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:16:13.717244 containerd[2803]: time="2025-07-07T02:16:13.717170768Z" level=info msg="StartContainer for \"2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4\"" Jul 7 02:16:13.717875 containerd[2803]: time="2025-07-07T02:16:13.717843820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 464.952343ms" Jul 7 02:16:13.717906 containerd[2803]: time="2025-07-07T02:16:13.717879819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 02:16:13.717927 containerd[2803]: time="2025-07-07T02:16:13.717892738Z" level=info msg="connecting to shim 2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4" address="unix:///run/containerd/s/353d898b525616d48dff962259ad05aface1e63c6b25f4f07db7e3f4d36803b6" protocol=ttrpc version=3 Jul 7 02:16:13.719369 containerd[2803]: time="2025-07-07T02:16:13.719345879Z" level=info msg="CreateContainer within sandbox \"a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 02:16:13.725040 containerd[2803]: time="2025-07-07T02:16:13.725001689Z" level=info msg="Container 06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:13.729462 containerd[2803]: time="2025-07-07T02:16:13.729424750Z" level=info 
msg="CreateContainer within sandbox \"a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3\"" Jul 7 02:16:13.729768 containerd[2803]: time="2025-07-07T02:16:13.729744377Z" level=info msg="StartContainer for \"06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3\"" Jul 7 02:16:13.731105 containerd[2803]: time="2025-07-07T02:16:13.731080922Z" level=info msg="connecting to shim 06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3" address="unix:///run/containerd/s/2cc950e877f4803645be6a51d218c358eb249c502a22ea5386cbc7893ddcd39d" protocol=ttrpc version=3 Jul 7 02:16:13.751809 systemd[1]: Started cri-containerd-2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4.scope - libcontainer container 2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4. Jul 7 02:16:13.754280 systemd[1]: Started cri-containerd-06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3.scope - libcontainer container 06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3. 
Jul 7 02:16:13.773125 containerd[2803]: time="2025-07-07T02:16:13.773098135Z" level=info msg="StartContainer for \"2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4\" returns successfully" Jul 7 02:16:13.782877 containerd[2803]: time="2025-07-07T02:16:13.782849658Z" level=info msg="StartContainer for \"06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3\" returns successfully" Jul 7 02:16:14.452807 systemd-networkd[2713]: cali1c21cc7a98e: Gained IPv6LL Jul 7 02:16:14.516773 systemd-networkd[2713]: cali3b3cee75e7a: Gained IPv6LL Jul 7 02:16:14.606873 kubelet[4310]: I0707 02:16:14.606852 4310 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 02:16:14.607102 kubelet[4310]: I0707 02:16:14.606879 4310 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 02:16:14.649623 kubelet[4310]: I0707 02:16:14.649601 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:14.656487 kubelet[4310]: I0707 02:16:14.656443 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v4gts" podStartSLOduration=19.747478278 podStartE2EDuration="20.656432736s" podCreationTimestamp="2025-07-07 02:15:54 +0000 UTC" firstStartedPulling="2025-07-07 02:16:12.809286546 +0000 UTC m=+39.323931681" lastFinishedPulling="2025-07-07 02:16:13.718241004 +0000 UTC m=+40.232886139" observedRunningTime="2025-07-07 02:16:14.656269662 +0000 UTC m=+41.170914797" watchObservedRunningTime="2025-07-07 02:16:14.656432736 +0000 UTC m=+41.171077831" Jul 7 02:16:14.666134 kubelet[4310]: I0707 02:16:14.666080 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7hwxq" podStartSLOduration=33.666066199 
podStartE2EDuration="33.666066199s" podCreationTimestamp="2025-07-07 02:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:16:14.665567219 +0000 UTC m=+41.180212354" watchObservedRunningTime="2025-07-07 02:16:14.666066199 +0000 UTC m=+41.180711334" Jul 7 02:16:15.559509 containerd[2803]: time="2025-07-07T02:16:15.559463540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-z27j7,Uid:1bbf5233-af42-4f01-bf6f-df8d8e280cbf,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:16:15.606424 systemd-networkd[2713]: calia965566ac59: Gained IPv6LL Jul 7 02:16:15.654729 systemd-networkd[2713]: calif7b667a5d5a: Link UP Jul 7 02:16:15.654915 systemd-networkd[2713]: calif7b667a5d5a: Gained carrier Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.595 [INFO][8548] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.605 [INFO][8548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0 calico-apiserver-674744f469- calico-apiserver 1bbf5233-af42-4f01-bf6f-df8d8e280cbf 793 0 2025-07-07 02:15:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674744f469 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b calico-apiserver-674744f469-z27j7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7b667a5d5a [] [] }} ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" 
WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.605 [INFO][8548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.626 [INFO][8573] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.626 [INFO][8573] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"calico-apiserver-674744f469-z27j7", "timestamp":"2025-07-07 02:16:15.62619447 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.626 [INFO][8573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.626 [INFO][8573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.626 [INFO][8573] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.634 [INFO][8573] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.637 [INFO][8573] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.640 [INFO][8573] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.641 [INFO][8573] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.643 [INFO][8573] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.643 [INFO][8573] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.644 [INFO][8573] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.646 [INFO][8573] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" 
host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.650 [INFO][8573] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.137/26] block=192.168.33.128/26 handle="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.650 [INFO][8573] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.137/26] handle="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.650 [INFO][8573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:15.661953 containerd[2803]: 2025-07-07 02:16:15.650 [INFO][8573] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.137/26] IPv6=[] ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:15.662462 containerd[2803]: 2025-07-07 02:16:15.652 [INFO][8548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0", GenerateName:"calico-apiserver-674744f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"1bbf5233-af42-4f01-bf6f-df8d8e280cbf", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 48, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674744f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"calico-apiserver-674744f469-z27j7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7b667a5d5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:15.662462 containerd[2803]: 2025-07-07 02:16:15.652 [INFO][8548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.137/32] ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:15.662462 containerd[2803]: 2025-07-07 02:16:15.652 [INFO][8548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7b667a5d5a ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:15.662462 containerd[2803]: 2025-07-07 02:16:15.654 [INFO][8548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:15.662462 containerd[2803]: 2025-07-07 02:16:15.655 [INFO][8548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0", GenerateName:"calico-apiserver-674744f469-", Namespace:"calico-apiserver", SelfLink:"", UID:"1bbf5233-af42-4f01-bf6f-df8d8e280cbf", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674744f469", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f", Pod:"calico-apiserver-674744f469-z27j7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7b667a5d5a", MAC:"ee:5a:7c:53:b0:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:15.662462 containerd[2803]: 2025-07-07 02:16:15.660 [INFO][8548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Namespace="calico-apiserver" Pod="calico-apiserver-674744f469-z27j7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:15.672046 containerd[2803]: time="2025-07-07T02:16:15.672013306Z" level=info msg="connecting to shim 07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" address="unix:///run/containerd/s/53052efd21183786350914e760d1a3cba522e5f3c6f5195f46ba027899fe6f94" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:15.700844 systemd[1]: Started cri-containerd-07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f.scope - libcontainer container 07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f. 
Jul 7 02:16:15.727386 containerd[2803]: time="2025-07-07T02:16:15.727359144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674744f469-z27j7,Uid:1bbf5233-af42-4f01-bf6f-df8d8e280cbf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\"" Jul 7 02:16:15.729089 containerd[2803]: time="2025-07-07T02:16:15.729068400Z" level=info msg="CreateContainer within sandbox \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:16:15.732735 containerd[2803]: time="2025-07-07T02:16:15.732707903Z" level=info msg="Container 2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:15.735853 containerd[2803]: time="2025-07-07T02:16:15.735820466Z" level=info msg="CreateContainer within sandbox \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\"" Jul 7 02:16:15.736259 containerd[2803]: time="2025-07-07T02:16:15.736236810Z" level=info msg="StartContainer for \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\"" Jul 7 02:16:15.737190 containerd[2803]: time="2025-07-07T02:16:15.737166095Z" level=info msg="connecting to shim 2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6" address="unix:///run/containerd/s/53052efd21183786350914e760d1a3cba522e5f3c6f5195f46ba027899fe6f94" protocol=ttrpc version=3 Jul 7 02:16:15.765864 systemd[1]: Started cri-containerd-2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6.scope - libcontainer container 2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6. 
Jul 7 02:16:15.794554 containerd[2803]: time="2025-07-07T02:16:15.794529857Z" level=info msg="StartContainer for \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" returns successfully" Jul 7 02:16:16.665115 kubelet[4310]: I0707 02:16:16.665062 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-674744f469-z27j7" podStartSLOduration=28.66504594 podStartE2EDuration="28.66504594s" podCreationTimestamp="2025-07-07 02:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:16:16.664796309 +0000 UTC m=+43.179441444" watchObservedRunningTime="2025-07-07 02:16:16.66504594 +0000 UTC m=+43.179691075" Jul 7 02:16:17.268845 systemd-networkd[2713]: calif7b667a5d5a: Gained IPv6LL Jul 7 02:16:19.720476 kubelet[4310]: I0707 02:16:19.720401 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:19.747650 containerd[2803]: time="2025-07-07T02:16:19.747610045Z" level=info msg="StopContainer for \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" with timeout 30 (s)" Jul 7 02:16:19.747966 containerd[2803]: time="2025-07-07T02:16:19.747946554Z" level=info msg="Stop container \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" with signal terminated" Jul 7 02:16:19.756822 systemd[1]: cri-containerd-2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6.scope: Deactivated successfully. Jul 7 02:16:19.757129 systemd[1]: cri-containerd-2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6.scope: Consumed 1.333s CPU time, 67.5M memory peak. 
Jul 7 02:16:19.757827 containerd[2803]: time="2025-07-07T02:16:19.757802233Z" level=info msg="received exit event container_id:\"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" id:\"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" pid:8659 exit_status:1 exited_at:{seconds:1751854579 nanos:757609119}" Jul 7 02:16:19.757912 containerd[2803]: time="2025-07-07T02:16:19.757887150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" id:\"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" pid:8659 exit_status:1 exited_at:{seconds:1751854579 nanos:757609119}" Jul 7 02:16:19.774976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6-rootfs.mount: Deactivated successfully. Jul 7 02:16:19.781586 systemd[1]: Created slice kubepods-besteffort-pod1c323b12_2b45_452a_8cd3_8275ca94272a.slice - libcontainer container kubepods-besteffort-pod1c323b12_2b45_452a_8cd3_8275ca94272a.slice. 
Jul 7 02:16:19.829488 kubelet[4310]: I0707 02:16:19.829444 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c323b12-2b45-452a-8cd3-8275ca94272a-calico-apiserver-certs\") pod \"calico-apiserver-fc4f5fbcf-tm2d7\" (UID: \"1c323b12-2b45-452a-8cd3-8275ca94272a\") " pod="calico-apiserver/calico-apiserver-fc4f5fbcf-tm2d7" Jul 7 02:16:19.829601 kubelet[4310]: I0707 02:16:19.829504 4310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqrpc\" (UniqueName: \"kubernetes.io/projected/1c323b12-2b45-452a-8cd3-8275ca94272a-kube-api-access-qqrpc\") pod \"calico-apiserver-fc4f5fbcf-tm2d7\" (UID: \"1c323b12-2b45-452a-8cd3-8275ca94272a\") " pod="calico-apiserver/calico-apiserver-fc4f5fbcf-tm2d7" Jul 7 02:16:19.865532 kubelet[4310]: I0707 02:16:19.865503 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:20.092474 containerd[2803]: time="2025-07-07T02:16:20.092443372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4f5fbcf-tm2d7,Uid:1c323b12-2b45-452a-8cd3-8275ca94272a,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:16:20.184245 systemd-networkd[2713]: cali7c0d42a0fd6: Link UP Jul 7 02:16:20.184428 systemd-networkd[2713]: cali7c0d42a0fd6: Gained carrier Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.124 [INFO][9020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0 calico-apiserver-fc4f5fbcf- calico-apiserver 1c323b12-2b45-452a-8cd3-8275ca94272a 1045 0 2025-07-07 02:16:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc4f5fbcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-e89e5d604b calico-apiserver-fc4f5fbcf-tm2d7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c0d42a0fd6 [] [] }} ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.124 [INFO][9020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.145 [INFO][9046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" HandleID="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.145 [INFO][9046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" HandleID="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000455240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-e89e5d604b", "pod":"calico-apiserver-fc4f5fbcf-tm2d7", "timestamp":"2025-07-07 02:16:20.14509459 +0000 UTC"}, Hostname:"ci-4372.0.1-a-e89e5d604b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.145 [INFO][9046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.145 [INFO][9046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.145 [INFO][9046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-e89e5d604b' Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.153 [INFO][9046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.156 [INFO][9046] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.159 [INFO][9046] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.160 [INFO][9046] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.162 [INFO][9046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.162 [INFO][9046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.163 [INFO][9046] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.165 [INFO][9046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.173 [INFO][9046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.138/26] block=192.168.33.128/26 handle="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.173 [INFO][9046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.138/26] handle="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" host="ci-4372.0.1-a-e89e5d604b" Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.173 [INFO][9046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:16:20.191838 containerd[2803]: 2025-07-07 02:16:20.173 [INFO][9046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.138/26] IPv6=[] ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" HandleID="k8s-pod-network.4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" Jul 7 02:16:20.192299 containerd[2803]: 2025-07-07 02:16:20.182 [INFO][9020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0", GenerateName:"calico-apiserver-fc4f5fbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c323b12-2b45-452a-8cd3-8275ca94272a", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4f5fbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"", Pod:"calico-apiserver-fc4f5fbcf-tm2d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.33.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c0d42a0fd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:20.192299 containerd[2803]: 2025-07-07 02:16:20.182 [INFO][9020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.138/32] ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" Jul 7 02:16:20.192299 containerd[2803]: 2025-07-07 02:16:20.182 [INFO][9020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c0d42a0fd6 ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" Jul 7 02:16:20.192299 containerd[2803]: 2025-07-07 02:16:20.184 [INFO][9020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" Jul 7 02:16:20.192299 containerd[2803]: 2025-07-07 02:16:20.184 [INFO][9020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0", GenerateName:"calico-apiserver-fc4f5fbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c323b12-2b45-452a-8cd3-8275ca94272a", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4f5fbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-e89e5d604b", ContainerID:"4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f", Pod:"calico-apiserver-fc4f5fbcf-tm2d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c0d42a0fd6", MAC:"9a:2c:e8:fc:1b:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:16:20.192299 containerd[2803]: 2025-07-07 02:16:20.190 [INFO][9020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" Namespace="calico-apiserver" Pod="calico-apiserver-fc4f5fbcf-tm2d7" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--fc4f5fbcf--tm2d7-eth0" Jul 7 02:16:20.293512 systemd-networkd[2713]: vxlan.calico: Link UP Jul 7 02:16:20.293517 
systemd-networkd[2713]: vxlan.calico: Gained carrier Jul 7 02:16:20.347020 containerd[2803]: time="2025-07-07T02:16:20.346927779Z" level=info msg="connecting to shim 4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f" address="unix:///run/containerd/s/bb332757a3cce8b5d1a99f3dca7ab2792a4c01c1542e2f5c47c8ae8eca0595c5" namespace=k8s.io protocol=ttrpc version=3 Jul 7 02:16:20.375801 systemd[1]: Started cri-containerd-4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f.scope - libcontainer container 4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f. Jul 7 02:16:20.418469 containerd[2803]: time="2025-07-07T02:16:20.418438042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4f5fbcf-tm2d7,Uid:1c323b12-2b45-452a-8cd3-8275ca94272a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f\"" Jul 7 02:16:20.420326 containerd[2803]: time="2025-07-07T02:16:20.420304183Z" level=info msg="CreateContainer within sandbox \"4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:16:20.441867 containerd[2803]: time="2025-07-07T02:16:20.441835623Z" level=info msg="Container 46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92: CDI devices from CRI Config.CDIDevices: []" Jul 7 02:16:20.463902 containerd[2803]: time="2025-07-07T02:16:20.463876687Z" level=info msg="CreateContainer within sandbox \"4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92\"" Jul 7 02:16:20.464259 containerd[2803]: time="2025-07-07T02:16:20.464199637Z" level=info msg="StartContainer for \"46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92\"" Jul 7 02:16:20.465100 containerd[2803]: 
time="2025-07-07T02:16:20.465081289Z" level=info msg="connecting to shim 46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92" address="unix:///run/containerd/s/bb332757a3cce8b5d1a99f3dca7ab2792a4c01c1542e2f5c47c8ae8eca0595c5" protocol=ttrpc version=3 Jul 7 02:16:20.488804 systemd[1]: Started cri-containerd-46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92.scope - libcontainer container 46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92. Jul 7 02:16:20.498771 containerd[2803]: time="2025-07-07T02:16:20.498678029Z" level=info msg="StopContainer for \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" returns successfully" Jul 7 02:16:20.499156 containerd[2803]: time="2025-07-07T02:16:20.499130855Z" level=info msg="StopPodSandbox for \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\"" Jul 7 02:16:20.499211 containerd[2803]: time="2025-07-07T02:16:20.499194853Z" level=info msg="Container to stop \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 02:16:20.504530 systemd[1]: cri-containerd-07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f.scope: Deactivated successfully. 
Jul 7 02:16:20.511045 containerd[2803]: time="2025-07-07T02:16:20.511021039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" id:\"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" pid:8634 exit_status:137 exited_at:{seconds:1751854580 nanos:510854325}" Jul 7 02:16:20.519153 containerd[2803]: time="2025-07-07T02:16:20.519127703Z" level=info msg="StartContainer for \"46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92\" returns successfully" Jul 7 02:16:20.531429 containerd[2803]: time="2025-07-07T02:16:20.531402996Z" level=info msg="shim disconnected" id=07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f namespace=k8s.io Jul 7 02:16:20.531553 containerd[2803]: time="2025-07-07T02:16:20.531431875Z" level=warning msg="cleaning up after shim disconnected" id=07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f namespace=k8s.io Jul 7 02:16:20.531553 containerd[2803]: time="2025-07-07T02:16:20.531461314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:16:20.540892 containerd[2803]: time="2025-07-07T02:16:20.540862697Z" level=info msg="received exit event sandbox_id:\"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" exit_status:137 exited_at:{seconds:1751854580 nanos:510854325}" Jul 7 02:16:20.575964 systemd-networkd[2713]: calif7b667a5d5a: Link DOWN Jul 7 02:16:20.575968 systemd-networkd[2713]: calif7b667a5d5a: Lost carrier Jul 7 02:16:20.667778 kubelet[4310]: I0707 02:16:20.667708 4310 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:20.673906 kubelet[4310]: I0707 02:16:20.673867 4310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc4f5fbcf-tm2d7" podStartSLOduration=1.673852899 podStartE2EDuration="1.673852899s" 
podCreationTimestamp="2025-07-07 02:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:16:20.673638666 +0000 UTC m=+47.188283761" watchObservedRunningTime="2025-07-07 02:16:20.673852899 +0000 UTC m=+47.188498034" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.575 [INFO][9468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.575 [INFO][9468] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" iface="eth0" netns="/var/run/netns/cni-fc756de3-b32d-f735-6b4e-b4d6ceff2df7" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.575 [INFO][9468] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" iface="eth0" netns="/var/run/netns/cni-fc756de3-b32d-f735-6b4e-b4d6ceff2df7" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.620 [INFO][9468] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" after=45.422446ms iface="eth0" netns="/var/run/netns/cni-fc756de3-b32d-f735-6b4e-b4d6ceff2df7" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.620 [INFO][9468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.620 [INFO][9468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.638 [INFO][9514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.638 [INFO][9514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.638 [INFO][9514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.672 [INFO][9514] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.672 [INFO][9514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.673 [INFO][9514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:20.676997 containerd[2803]: 2025-07-07 02:16:20.675 [INFO][9468] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:20.677272 containerd[2803]: time="2025-07-07T02:16:20.677178034Z" level=info msg="TearDown network for sandbox \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" successfully" Jul 7 02:16:20.677272 containerd[2803]: time="2025-07-07T02:16:20.677200674Z" level=info msg="StopPodSandbox for \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" returns successfully" Jul 7 02:16:20.734590 kubelet[4310]: I0707 02:16:20.734536 4310 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-calico-apiserver-certs\") pod \"1bbf5233-af42-4f01-bf6f-df8d8e280cbf\" (UID: \"1bbf5233-af42-4f01-bf6f-df8d8e280cbf\") " Jul 7 02:16:20.734590 kubelet[4310]: I0707 02:16:20.734591 4310 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56mv4\" (UniqueName: \"kubernetes.io/projected/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-kube-api-access-56mv4\") pod \"1bbf5233-af42-4f01-bf6f-df8d8e280cbf\" (UID: \"1bbf5233-af42-4f01-bf6f-df8d8e280cbf\") " Jul 7 02:16:20.736882 kubelet[4310]: I0707 02:16:20.736856 4310 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-kube-api-access-56mv4" (OuterVolumeSpecName: "kube-api-access-56mv4") pod "1bbf5233-af42-4f01-bf6f-df8d8e280cbf" (UID: "1bbf5233-af42-4f01-bf6f-df8d8e280cbf"). InnerVolumeSpecName "kube-api-access-56mv4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 02:16:20.736919 kubelet[4310]: I0707 02:16:20.736906 4310 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "1bbf5233-af42-4f01-bf6f-df8d8e280cbf" (UID: "1bbf5233-af42-4f01-bf6f-df8d8e280cbf"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 02:16:20.776594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f-rootfs.mount: Deactivated successfully. Jul 7 02:16:20.776670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f-shm.mount: Deactivated successfully. Jul 7 02:16:20.776725 systemd[1]: run-netns-cni\x2dfc756de3\x2db32d\x2df735\x2d6b4e\x2db4d6ceff2df7.mount: Deactivated successfully. Jul 7 02:16:20.776781 systemd[1]: var-lib-kubelet-pods-1bbf5233\x2daf42\x2d4f01\x2dbf6f\x2ddf8d8e280cbf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d56mv4.mount: Deactivated successfully. Jul 7 02:16:20.776829 systemd[1]: var-lib-kubelet-pods-1bbf5233\x2daf42\x2d4f01\x2dbf6f\x2ddf8d8e280cbf-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jul 7 02:16:20.835294 kubelet[4310]: I0707 02:16:20.835266 4310 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-56mv4\" (UniqueName: \"kubernetes.io/projected/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-kube-api-access-56mv4\") on node \"ci-4372.0.1-a-e89e5d604b\" DevicePath \"\"" Jul 7 02:16:20.835360 kubelet[4310]: I0707 02:16:20.835298 4310 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1bbf5233-af42-4f01-bf6f-df8d8e280cbf-calico-apiserver-certs\") on node \"ci-4372.0.1-a-e89e5d604b\" DevicePath \"\"" Jul 7 02:16:21.562672 systemd[1]: Removed slice kubepods-besteffort-pod1bbf5233_af42_4f01_bf6f_df8d8e280cbf.slice - libcontainer container kubepods-besteffort-pod1bbf5233_af42_4f01_bf6f_df8d8e280cbf.slice. Jul 7 02:16:21.562779 systemd[1]: kubepods-besteffort-pod1bbf5233_af42_4f01_bf6f_df8d8e280cbf.slice: Consumed 1.349s CPU time, 68M memory peak. Jul 7 02:16:21.669669 kubelet[4310]: I0707 02:16:21.669634 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:21.748810 systemd-networkd[2713]: cali7c0d42a0fd6: Gained IPv6LL Jul 7 02:16:21.749086 systemd-networkd[2713]: vxlan.calico: Gained IPv6LL Jul 7 02:16:21.875831 containerd[2803]: time="2025-07-07T02:16:21.875803268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"d0f6f08e40b9a228c9bfc56fa45b050982c25d17a954f5a3c142322a15d57076\" pid:9648 exited_at:{seconds:1751854581 nanos:875538956}" Jul 7 02:16:23.559619 kubelet[4310]: I0707 02:16:23.559513 4310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbf5233-af42-4f01-bf6f-df8d8e280cbf" path="/var/lib/kubelet/pods/1bbf5233-af42-4f01-bf6f-df8d8e280cbf/volumes" Jul 7 02:16:25.373010 kubelet[4310]: I0707 02:16:25.372952 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:25.408646 
containerd[2803]: time="2025-07-07T02:16:25.408612802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"877c2dbf776f26c9f61f13f2f37b5f5ccbb7c27a89b5b9b959247a7190d7fe97\" pid:9685 exited_at:{seconds:1751854585 nanos:408411048}" Jul 7 02:16:25.447601 containerd[2803]: time="2025-07-07T02:16:25.447563944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"3ce8165fc755a89d3af2d5759b0c82f231098e537a5db2efb4cfb78a312a4c5b\" pid:9706 exited_at:{seconds:1751854585 nanos:447425587}" Jul 7 02:16:33.549251 kubelet[4310]: I0707 02:16:33.549157 4310 scope.go:117] "RemoveContainer" containerID="2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6" Jul 7 02:16:33.550831 containerd[2803]: time="2025-07-07T02:16:33.550803568Z" level=info msg="RemoveContainer for \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\"" Jul 7 02:16:33.553461 containerd[2803]: time="2025-07-07T02:16:33.553437749Z" level=info msg="RemoveContainer for \"2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6\" returns successfully" Jul 7 02:16:33.554404 containerd[2803]: time="2025-07-07T02:16:33.554386607Z" level=info msg="StopPodSandbox for \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\"" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.584 [WARNING][9747] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.584 [INFO][9747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.614090 
containerd[2803]: 2025-07-07 02:16:33.584 [INFO][9747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" iface="eth0" netns="" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.584 [INFO][9747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.584 [INFO][9747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.602 [INFO][9770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.602 [INFO][9770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.602 [INFO][9770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.609 [WARNING][9770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.609 [INFO][9770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.610 [INFO][9770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:33.614090 containerd[2803]: 2025-07-07 02:16:33.612 [INFO][9747] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.614337 containerd[2803]: time="2025-07-07T02:16:33.614121821Z" level=info msg="TearDown network for sandbox \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" successfully" Jul 7 02:16:33.614337 containerd[2803]: time="2025-07-07T02:16:33.614141821Z" level=info msg="StopPodSandbox for \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" returns successfully" Jul 7 02:16:33.614562 containerd[2803]: time="2025-07-07T02:16:33.614541132Z" level=info msg="RemovePodSandbox for \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\"" Jul 7 02:16:33.614587 containerd[2803]: time="2025-07-07T02:16:33.614569171Z" level=info msg="Forcibly stopping sandbox \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\"" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.644 [WARNING][9803] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.644 [INFO][9803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.644 [INFO][9803] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" iface="eth0" netns="" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.644 [INFO][9803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.644 [INFO][9803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.662 [INFO][9824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.662 [INFO][9824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.662 [INFO][9824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.669 [WARNING][9824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.669 [INFO][9824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" HandleID="k8s-pod-network.07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--z27j7-eth0" Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.670 [INFO][9824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:33.674309 containerd[2803]: 2025-07-07 02:16:33.672 [INFO][9803] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f" Jul 7 02:16:33.674680 containerd[2803]: time="2025-07-07T02:16:33.674355783Z" level=info msg="TearDown network for sandbox \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" successfully" Jul 7 02:16:33.675917 containerd[2803]: time="2025-07-07T02:16:33.675892469Z" level=info msg="Ensure that sandbox 07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f in task-service has been cleanup successfully" Jul 7 02:16:33.676605 containerd[2803]: time="2025-07-07T02:16:33.676582613Z" level=info msg="RemovePodSandbox \"07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f\" returns successfully" Jul 7 02:16:38.372523 containerd[2803]: time="2025-07-07T02:16:38.372483186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"32bce77c22be34101dd433fb6182bce20ca91787f06fc0b6b2e9f7075d7c795e\" pid:9859 exited_at:{seconds:1751854598 nanos:372254151}" Jul 7 02:16:42.710037 
containerd[2803]: time="2025-07-07T02:16:42.709976308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"25ed2cdf3aa9e4c3e990e862caa0a7a69d9195ee04cc227ecc4f8f9141160afa\" pid:9936 exited_at:{seconds:1751854602 nanos:709739753}" Jul 7 02:16:49.737754 kubelet[4310]: I0707 02:16:49.737693 4310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:16:49.765340 containerd[2803]: time="2025-07-07T02:16:49.765302331Z" level=info msg="StopContainer for \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" with timeout 30 (s)" Jul 7 02:16:49.766074 containerd[2803]: time="2025-07-07T02:16:49.766052198Z" level=info msg="Stop container \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" with signal terminated" Jul 7 02:16:49.775387 systemd[1]: cri-containerd-1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e.scope: Deactivated successfully. Jul 7 02:16:49.776000 systemd[1]: cri-containerd-1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e.scope: Consumed 1.596s CPU time, 70.9M memory peak. 
Jul 7 02:16:49.776902 containerd[2803]: time="2025-07-07T02:16:49.776870762Z" level=info msg="received exit event container_id:\"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" id:\"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" pid:7785 exit_status:1 exited_at:{seconds:1751854609 nanos:776415130}" Jul 7 02:16:49.777203 containerd[2803]: time="2025-07-07T02:16:49.776934441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" id:\"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" pid:7785 exit_status:1 exited_at:{seconds:1751854609 nanos:776415130}" Jul 7 02:16:49.793605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e-rootfs.mount: Deactivated successfully. Jul 7 02:16:49.794848 containerd[2803]: time="2025-07-07T02:16:49.794820757Z" level=info msg="StopContainer for \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" returns successfully" Jul 7 02:16:49.796210 containerd[2803]: time="2025-07-07T02:16:49.796110613Z" level=info msg="StopPodSandbox for \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\"" Jul 7 02:16:49.796210 containerd[2803]: time="2025-07-07T02:16:49.796176572Z" level=info msg="Container to stop \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 02:16:49.801707 systemd[1]: cri-containerd-049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83.scope: Deactivated successfully. 
Jul 7 02:16:49.802585 containerd[2803]: time="2025-07-07T02:16:49.802564016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" id:\"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" pid:7547 exit_status:137 exited_at:{seconds:1751854609 nanos:802356300}" Jul 7 02:16:49.820699 containerd[2803]: time="2025-07-07T02:16:49.820605610Z" level=error msg="ttrpc: received message on inactive stream" stream=59 Jul 7 02:16:49.820699 containerd[2803]: time="2025-07-07T02:16:49.820667808Z" level=info msg="shim disconnected" id=049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 namespace=k8s.io Jul 7 02:16:49.820699 containerd[2803]: time="2025-07-07T02:16:49.820678568Z" level=warning msg="cleaning up after shim disconnected" id=049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 namespace=k8s.io Jul 7 02:16:49.820699 containerd[2803]: time="2025-07-07T02:16:49.820691088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:16:49.822298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83-rootfs.mount: Deactivated successfully. Jul 7 02:16:49.846444 containerd[2803]: time="2025-07-07T02:16:49.846391582Z" level=info msg="received exit event sandbox_id:\"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" exit_status:137 exited_at:{seconds:1751854609 nanos:802356300}" Jul 7 02:16:49.848399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83-shm.mount: Deactivated successfully. 
Jul 7 02:16:49.882837 systemd-networkd[2713]: cali833de639dae: Link DOWN Jul 7 02:16:49.882842 systemd-networkd[2713]: cali833de639dae: Lost carrier Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.882 [INFO][10044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.882 [INFO][10044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" iface="eth0" netns="/var/run/netns/cni-6326918f-be85-c60c-d0a1-32d296e1e6f2" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.882 [INFO][10044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" iface="eth0" netns="/var/run/netns/cni-6326918f-be85-c60c-d0a1-32d296e1e6f2" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.904 [INFO][10044] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" after=22.65191ms iface="eth0" netns="/var/run/netns/cni-6326918f-be85-c60c-d0a1-32d296e1e6f2" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.904 [INFO][10044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.904 [INFO][10044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.922 [INFO][10072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.922 [INFO][10072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.922 [INFO][10072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.946 [INFO][10072] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.946 [INFO][10072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.947 [INFO][10072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:16:49.949882 containerd[2803]: 2025-07-07 02:16:49.948 [INFO][10044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:16:49.950338 containerd[2803]: time="2025-07-07T02:16:49.950317100Z" level=info msg="TearDown network for sandbox \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" successfully" Jul 7 02:16:49.950368 containerd[2803]: time="2025-07-07T02:16:49.950340339Z" level=info msg="StopPodSandbox for \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" returns successfully" Jul 7 02:16:49.952113 systemd[1]: run-netns-cni\x2d6326918f\x2dbe85\x2dc60c\x2dd0a1\x2d32d296e1e6f2.mount: Deactivated successfully. 
Jul 7 02:16:49.998163 kubelet[4310]: I0707 02:16:49.998069 4310 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-calico-apiserver-certs\") pod \"19e3fd3f-6889-408b-8d4f-0c6f70c06d65\" (UID: \"19e3fd3f-6889-408b-8d4f-0c6f70c06d65\") " Jul 7 02:16:49.998163 kubelet[4310]: I0707 02:16:49.998123 4310 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcpsx\" (UniqueName: \"kubernetes.io/projected/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-kube-api-access-jcpsx\") pod \"19e3fd3f-6889-408b-8d4f-0c6f70c06d65\" (UID: \"19e3fd3f-6889-408b-8d4f-0c6f70c06d65\") " Jul 7 02:16:50.000394 kubelet[4310]: I0707 02:16:50.000368 4310 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-kube-api-access-jcpsx" (OuterVolumeSpecName: "kube-api-access-jcpsx") pod "19e3fd3f-6889-408b-8d4f-0c6f70c06d65" (UID: "19e3fd3f-6889-408b-8d4f-0c6f70c06d65"). InnerVolumeSpecName "kube-api-access-jcpsx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 02:16:50.000456 kubelet[4310]: I0707 02:16:50.000422 4310 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "19e3fd3f-6889-408b-8d4f-0c6f70c06d65" (UID: "19e3fd3f-6889-408b-8d4f-0c6f70c06d65"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 02:16:50.002015 systemd[1]: var-lib-kubelet-pods-19e3fd3f\x2d6889\x2d408b\x2d8d4f\x2d0c6f70c06d65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djcpsx.mount: Deactivated successfully. 
Jul 7 02:16:50.002106 systemd[1]: var-lib-kubelet-pods-19e3fd3f\x2d6889\x2d408b\x2d8d4f\x2d0c6f70c06d65-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 7 02:16:50.099108 kubelet[4310]: I0707 02:16:50.099066 4310 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-calico-apiserver-certs\") on node \"ci-4372.0.1-a-e89e5d604b\" DevicePath \"\"" Jul 7 02:16:50.099108 kubelet[4310]: I0707 02:16:50.099097 4310 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jcpsx\" (UniqueName: \"kubernetes.io/projected/19e3fd3f-6889-408b-8d4f-0c6f70c06d65-kube-api-access-jcpsx\") on node \"ci-4372.0.1-a-e89e5d604b\" DevicePath \"\"" Jul 7 02:16:50.717416 kubelet[4310]: I0707 02:16:50.717360 4310 scope.go:117] "RemoveContainer" containerID="1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e" Jul 7 02:16:50.718800 containerd[2803]: time="2025-07-07T02:16:50.718772127Z" level=info msg="RemoveContainer for \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\"" Jul 7 02:16:50.720991 systemd[1]: Removed slice kubepods-besteffort-pod19e3fd3f_6889_408b_8d4f_0c6f70c06d65.slice - libcontainer container kubepods-besteffort-pod19e3fd3f_6889_408b_8d4f_0c6f70c06d65.slice. Jul 7 02:16:50.721076 containerd[2803]: time="2025-07-07T02:16:50.720994408Z" level=info msg="RemoveContainer for \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" returns successfully" Jul 7 02:16:50.721079 systemd[1]: kubepods-besteffort-pod19e3fd3f_6889_408b_8d4f_0c6f70c06d65.slice: Consumed 1.612s CPU time, 71.3M memory peak. 
Jul 7 02:16:50.721158 kubelet[4310]: I0707 02:16:50.721115 4310 scope.go:117] "RemoveContainer" containerID="1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e" Jul 7 02:16:50.721286 containerd[2803]: time="2025-07-07T02:16:50.721255043Z" level=error msg="ContainerStatus for \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\": not found" Jul 7 02:16:50.721380 kubelet[4310]: E0707 02:16:50.721358 4310 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\": not found" containerID="1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e" Jul 7 02:16:50.721456 kubelet[4310]: I0707 02:16:50.721384 4310 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e"} err="failed to get container status \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e\": not found" Jul 7 02:16:51.559905 kubelet[4310]: I0707 02:16:51.559861 4310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e3fd3f-6889-408b-8d4f-0c6f70c06d65" path="/var/lib/kubelet/pods/19e3fd3f-6889-408b-8d4f-0c6f70c06d65/volumes" Jul 7 02:16:55.446321 containerd[2803]: time="2025-07-07T02:16:55.446280889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"da1096ad526345721d5bdbc9b706095efecf2d1a57b437a0e592eb0f9607c8ee\" pid:10119 exited_at:{seconds:1751854615 nanos:446119997}" Jul 7 
02:17:08.371720 containerd[2803]: time="2025-07-07T02:17:08.371666279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"3e3195691c327ca8b8fa58b930095decc7bb57b0cf280ed5abb54be6537347b2\" pid:10150 exited_at:{seconds:1751854628 nanos:371385906}" Jul 7 02:17:11.089311 containerd[2803]: time="2025-07-07T02:17:11.089267674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"f610ab4dcfd9e6e684e5c996c585fff7100614860c596ac62bf769d8f1edfd29\" pid:10187 exited_at:{seconds:1751854631 nanos:89108268}" Jul 7 02:17:12.725074 containerd[2803]: time="2025-07-07T02:17:12.725025091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"33b300c756bee0c065311b0497015e35d4c1407252a67343aba99c59534bf79d\" pid:10210 exited_at:{seconds:1751854632 nanos:724736119}" Jul 7 02:17:21.874653 containerd[2803]: time="2025-07-07T02:17:21.874614542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"0919aad819de8f49d9e4d07dfc9fedd90c7bddef36ab92ae6acecd4cde17fe5c\" pid:10248 exited_at:{seconds:1751854641 nanos:874466458}" Jul 7 02:17:25.450697 containerd[2803]: time="2025-07-07T02:17:25.450645737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"48442e539242b7d02042204722336c13a11151cd50bcce40622359a046ecf4c9\" pid:10285 exited_at:{seconds:1751854645 nanos:450464413}" Jul 7 02:17:33.679299 containerd[2803]: time="2025-07-07T02:17:33.679207201Z" level=info msg="StopPodSandbox for \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\"" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.709 [WARNING][10311] cni-plugin/k8s.go 598: 
WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.709 [INFO][10311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.709 [INFO][10311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" iface="eth0" netns="" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.709 [INFO][10311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.709 [INFO][10311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.726 [INFO][10334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.726 [INFO][10334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.726 [INFO][10334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.733 [WARNING][10334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.733 [INFO][10334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.734 [INFO][10334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:17:33.737809 containerd[2803]: 2025-07-07 02:17:33.736 [INFO][10311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.738217 containerd[2803]: time="2025-07-07T02:17:33.738190255Z" level=info msg="TearDown network for sandbox \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" successfully" Jul 7 02:17:33.738275 containerd[2803]: time="2025-07-07T02:17:33.738262776Z" level=info msg="StopPodSandbox for \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" returns successfully" Jul 7 02:17:33.738703 containerd[2803]: time="2025-07-07T02:17:33.738638782Z" level=info msg="RemovePodSandbox for \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\"" Jul 7 02:17:33.738703 containerd[2803]: time="2025-07-07T02:17:33.738705904Z" level=info msg="Forcibly stopping sandbox \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\"" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.767 [WARNING][10361] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" WorkloadEndpoint="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.767 [INFO][10361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.767 [INFO][10361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" iface="eth0" netns="" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.767 [INFO][10361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.767 [INFO][10361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.784 [INFO][10384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.784 [INFO][10384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.784 [INFO][10384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.791 [WARNING][10384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.792 [INFO][10384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" HandleID="k8s-pod-network.049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Workload="ci--4372.0.1--a--e89e5d604b-k8s-calico--apiserver--674744f469--4r7kc-eth0" Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.793 [INFO][10384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:17:33.796348 containerd[2803]: 2025-07-07 02:17:33.794 [INFO][10361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83" Jul 7 02:17:33.796847 containerd[2803]: time="2025-07-07T02:17:33.796824982Z" level=info msg="TearDown network for sandbox \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" successfully" Jul 7 02:17:33.798430 containerd[2803]: time="2025-07-07T02:17:33.798406089Z" level=info msg="Ensure that sandbox 049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 in task-service has been cleanup successfully" Jul 7 02:17:33.799193 containerd[2803]: time="2025-07-07T02:17:33.799171102Z" level=info msg="RemovePodSandbox \"049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83\" returns successfully" Jul 7 02:17:38.369186 containerd[2803]: time="2025-07-07T02:17:38.369142071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"330af9ea9e4d86cfea9e07f40e1974cbdccf46c3c78fc16d868b1604a3bd7f0f\" pid:10415 exited_at:{seconds:1751854658 nanos:368918748}" Jul 7 
02:17:42.702450 containerd[2803]: time="2025-07-07T02:17:42.702404835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"c21bbca118a7e064b00eca7e278f41c3df740ee540693809d0f8c908d09ed764\" pid:10460 exited_at:{seconds:1751854662 nanos:702194072}" Jul 7 02:17:55.447598 containerd[2803]: time="2025-07-07T02:17:55.447562069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"31fe1267a1008d9fc0217756e487b5a2fae4d95c5718408ac39d6f7320349fdb\" pid:10515 exited_at:{seconds:1751854675 nanos:447425828}" Jul 7 02:18:08.365535 containerd[2803]: time="2025-07-07T02:18:08.365436052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"5d6b6a8356d854b313ab5953835c7089554ca76d5d10925f991eda0cb95ce647\" pid:10554 exited_at:{seconds:1751854688 nanos:365161851}" Jul 7 02:18:11.095374 containerd[2803]: time="2025-07-07T02:18:11.095337064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"36a942fedb468836b1ffb449244938fdd8aede1f66bc02f8b229836386883ebc\" pid:10593 exited_at:{seconds:1751854691 nanos:95162744}" Jul 7 02:18:12.704932 containerd[2803]: time="2025-07-07T02:18:12.704894223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"0af008807e91b3ebdb1d8ab289759ecab3fd308d22509fdfc160f181241b5c24\" pid:10621 exited_at:{seconds:1751854692 nanos:704675702}" Jul 7 02:18:21.874014 containerd[2803]: time="2025-07-07T02:18:21.873953281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"ceebcc97377fd07e068cd6b9d6fa30c1e4c9a8365f32d89d026e7f6974e60a57\" 
pid:10685 exited_at:{seconds:1751854701 nanos:873667641}" Jul 7 02:18:25.450772 containerd[2803]: time="2025-07-07T02:18:25.450738361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"e51bd91994d4c2c0ee336cbb1f75603b110791bff3bb16a113c821b10cc7eafc\" pid:10724 exited_at:{seconds:1751854705 nanos:450589241}" Jul 7 02:18:38.371880 containerd[2803]: time="2025-07-07T02:18:38.371832172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"37a83201ec6937fe34a22ef45832008bbf5fcdeacd1102b9c6596c899db6adcf\" pid:10749 exited_at:{seconds:1751854718 nanos:371498653}" Jul 7 02:18:42.707948 containerd[2803]: time="2025-07-07T02:18:42.707918018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"7dc771830d8d5e5ca8844d26ff26d67f2ca49f23863fa3fa3f1519a7764cb9aa\" pid:10787 exited_at:{seconds:1751854722 nanos:707722539}" Jul 7 02:18:43.980762 systemd[1]: Started sshd@8-147.28.151.230:22-14.29.238.151:40448.service - OpenSSH per-connection server daemon (14.29.238.151:40448). Jul 7 02:18:46.467056 sshd[10817]: Received disconnect from 14.29.238.151 port 40448:11: Bye Bye [preauth] Jul 7 02:18:46.467056 sshd[10817]: Disconnected from authenticating user root 14.29.238.151 port 40448 [preauth] Jul 7 02:18:46.469246 systemd[1]: sshd@8-147.28.151.230:22-14.29.238.151:40448.service: Deactivated successfully. 
Jul 7 02:18:55.450630 containerd[2803]: time="2025-07-07T02:18:55.450589777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"63cebcde86c19f85649f8124d4206d6138cd4c4dbd82511f5c87fbb34dc1e685\" pid:10835 exited_at:{seconds:1751854735 nanos:450414938}" Jul 7 02:19:08.370772 containerd[2803]: time="2025-07-07T02:19:08.370727419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"5faf2681b6657d99786eca34e8f611b86bbfc3f57d07b7e88ef991415d113b30\" pid:10864 exited_at:{seconds:1751854748 nanos:370502617}" Jul 7 02:19:11.098433 containerd[2803]: time="2025-07-07T02:19:11.098388853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"dc4ff5fd734a215faf4b7b2602967e9933501eb30143d57d46ac3f4e9f8e2138\" pid:10899 exited_at:{seconds:1751854751 nanos:98217132}" Jul 7 02:19:12.698895 containerd[2803]: time="2025-07-07T02:19:12.698852732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"1817a0432eb2f93ceb821e59e69c09d8772024b607a0253a0528bee94b8605f8\" pid:10922 exited_at:{seconds:1751854752 nanos:698622530}" Jul 7 02:19:21.868016 containerd[2803]: time="2025-07-07T02:19:21.867968489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"0622f71417c251686b8c0fdddea68b54801e81878ef3522f608da17d5e73bf53\" pid:10958 exited_at:{seconds:1751854761 nanos:867760288}" Jul 7 02:19:25.447807 containerd[2803]: time="2025-07-07T02:19:25.447769564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"abbde157c5dbf20168e73a3d3e42f5a42ce5625c0efc1b485f8b577800373cfb\" 
pid:10995 exited_at:{seconds:1751854765 nanos:447584763}" Jul 7 02:19:29.288622 systemd[1]: Started sshd@9-147.28.151.230:22-180.76.151.217:34588.service - OpenSSH per-connection server daemon (180.76.151.217:34588). Jul 7 02:19:30.740598 sshd[11030]: Received disconnect from 180.76.151.217 port 34588:11: Bye Bye [preauth] Jul 7 02:19:30.740598 sshd[11030]: Disconnected from authenticating user root 180.76.151.217 port 34588 [preauth] Jul 7 02:19:30.742613 systemd[1]: sshd@9-147.28.151.230:22-180.76.151.217:34588.service: Deactivated successfully. Jul 7 02:19:38.371786 containerd[2803]: time="2025-07-07T02:19:38.371704431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"1761455682023631538bed0ebbf1e1d084f9f68434a6b53372a92f8b225d405f\" pid:11050 exited_at:{seconds:1751854778 nanos:371469751}" Jul 7 02:19:42.702822 containerd[2803]: time="2025-07-07T02:19:42.702791055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"fe6bcc115ddfc7fbafbe6be22736f2e6d318636d5331ec12596f8afdedc0070b\" pid:11087 exited_at:{seconds:1751854782 nanos:702568175}" Jul 7 02:19:55.446584 containerd[2803]: time="2025-07-07T02:19:55.446543937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"46f7589e1866a5a55e69ae9505e6d322c07bffa04e2ca6c3e5f14f17a30a2fda\" pid:11126 exited_at:{seconds:1751854795 nanos:446390337}" Jul 7 02:20:08.362043 containerd[2803]: time="2025-07-07T02:20:08.361997253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"5779dd39853e2b5b5c37ad17aceb27294cca298f3b1cfee53d7f163346df56ce\" pid:11155 exited_at:{seconds:1751854808 nanos:361769333}" Jul 7 02:20:11.098806 containerd[2803]: 
time="2025-07-07T02:20:11.098772024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"7bd4c764b8ceceeae4a91a7fe116e0b67ba08b704c985884c7f6cf10a79fb3d3\" pid:11187 exited_at:{seconds:1751854811 nanos:98609424}" Jul 7 02:20:12.704664 containerd[2803]: time="2025-07-07T02:20:12.704622694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"f3c79e8eebd6dae407464019aae392183d9e1314863bf739996987ebd7907847\" pid:11210 exited_at:{seconds:1751854812 nanos:704401974}" Jul 7 02:20:21.875882 containerd[2803]: time="2025-07-07T02:20:21.875838490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"3221ef8180317ed915e80dff15fd95c0bf23276135083c7327f91fd739b575e9\" pid:11258 exited_at:{seconds:1751854821 nanos:875484250}" Jul 7 02:20:25.447868 containerd[2803]: time="2025-07-07T02:20:25.447829935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"203743393e4e78aa4d32345a5a2c3bab4d4632ffdf97001d42a703f15937988c\" pid:11300 exited_at:{seconds:1751854825 nanos:447618815}" Jul 7 02:20:29.953348 containerd[2803]: time="2025-07-07T02:20:29.953280774Z" level=warning msg="container event discarded" container=ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee type=CONTAINER_CREATED_EVENT Jul 7 02:20:29.964517 containerd[2803]: time="2025-07-07T02:20:29.964487965Z" level=warning msg="container event discarded" container=ebe7b09ce226289e208389314bc0a2039064b46f7f3ccb7454adbe81ad2a09ee type=CONTAINER_STARTED_EVENT Jul 7 02:20:29.964517 containerd[2803]: time="2025-07-07T02:20:29.964511525Z" level=warning msg="container event discarded" container=4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342 
type=CONTAINER_CREATED_EVENT Jul 7 02:20:29.964517 containerd[2803]: time="2025-07-07T02:20:29.964519365Z" level=warning msg="container event discarded" container=4cc8c1cdc7698e6d1607931a751fef2c8de9519a0d6e52c411e9f76154ebf342 type=CONTAINER_STARTED_EVENT Jul 7 02:20:29.964665 containerd[2803]: time="2025-07-07T02:20:29.964525605Z" level=warning msg="container event discarded" container=f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a type=CONTAINER_CREATED_EVENT Jul 7 02:20:29.964665 containerd[2803]: time="2025-07-07T02:20:29.964533964Z" level=warning msg="container event discarded" container=f905f880e8c8d975864558f8248ac747eefa4fbc4539fc905d37af6771f8a40a type=CONTAINER_STARTED_EVENT Jul 7 02:20:29.976032 containerd[2803]: time="2025-07-07T02:20:29.975977674Z" level=warning msg="container event discarded" container=510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb type=CONTAINER_CREATED_EVENT Jul 7 02:20:29.976032 containerd[2803]: time="2025-07-07T02:20:29.976001674Z" level=warning msg="container event discarded" container=3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208 type=CONTAINER_CREATED_EVENT Jul 7 02:20:29.976032 containerd[2803]: time="2025-07-07T02:20:29.976009714Z" level=warning msg="container event discarded" container=1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482 type=CONTAINER_CREATED_EVENT Jul 7 02:20:30.039226 containerd[2803]: time="2025-07-07T02:20:30.039198897Z" level=warning msg="container event discarded" container=510942b79dcffc99cfa83c4c6b660be244222a8095cb49a0a359a8b41b4309fb type=CONTAINER_STARTED_EVENT Jul 7 02:20:30.039226 containerd[2803]: time="2025-07-07T02:20:30.039215257Z" level=warning msg="container event discarded" container=1e322d2e4c4ac912d16cf5a183bc42c7e8b1a02ff114378e49b3440a6c8b1482 type=CONTAINER_STARTED_EVENT Jul 7 02:20:30.039226 containerd[2803]: time="2025-07-07T02:20:30.039222217Z" level=warning msg="container event discarded" 
container=3e3edc846a111750bfe12393503201e734fadbec54672fd55da93ab52ad55208 type=CONTAINER_STARTED_EVENT Jul 7 02:20:38.373060 containerd[2803]: time="2025-07-07T02:20:38.373016151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"edb01838ec2977f11c4b5da3aa9da1d24980613fff16afb19fd258e392bd5514\" pid:11331 exited_at:{seconds:1751854838 nanos:372752392}" Jul 7 02:20:41.690622 containerd[2803]: time="2025-07-07T02:20:41.690566898Z" level=warning msg="container event discarded" container=0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6 type=CONTAINER_CREATED_EVENT Jul 7 02:20:41.690622 containerd[2803]: time="2025-07-07T02:20:41.690612738Z" level=warning msg="container event discarded" container=0b06619b6890f53277d08ea2d0b75e6fce740272ead56308d9e81a1fe287bbb6 type=CONTAINER_STARTED_EVENT Jul 7 02:20:41.700760 containerd[2803]: time="2025-07-07T02:20:41.700721724Z" level=warning msg="container event discarded" container=94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74 type=CONTAINER_CREATED_EVENT Jul 7 02:20:41.767939 containerd[2803]: time="2025-07-07T02:20:41.767917510Z" level=warning msg="container event discarded" container=94489d45f456a9f768ed4fc9d12727e45a1940c424967720a78bc52b0fa2ba74 type=CONTAINER_STARTED_EVENT Jul 7 02:20:42.128567 containerd[2803]: time="2025-07-07T02:20:42.128527885Z" level=warning msg="container event discarded" container=56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3 type=CONTAINER_CREATED_EVENT Jul 7 02:20:42.128567 containerd[2803]: time="2025-07-07T02:20:42.128545245Z" level=warning msg="container event discarded" container=56e25eb1aff5afc4731ca8d5643f687a734ed2ef61e0bf9d2b82b2c05f49a2c3 type=CONTAINER_STARTED_EVENT Jul 7 02:20:42.698603 containerd[2803]: time="2025-07-07T02:20:42.698569711Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"0624150e32ad06517d4acab66cbf3cd14c52cecf14af0f8f8b5bcc5a667661e4\" pid:11371 exited_at:{seconds:1751854842 nanos:698371392}" Jul 7 02:20:43.550891 containerd[2803]: time="2025-07-07T02:20:43.550854755Z" level=warning msg="container event discarded" container=906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0 type=CONTAINER_CREATED_EVENT Jul 7 02:20:43.602583 containerd[2803]: time="2025-07-07T02:20:43.602549519Z" level=warning msg="container event discarded" container=906886ed23953ceaf0ccb09ce8ac76df21bd8b018fe06a6d5e9fdc05128133e0 type=CONTAINER_STARTED_EVENT Jul 7 02:20:54.825005 containerd[2803]: time="2025-07-07T02:20:54.824955018Z" level=warning msg="container event discarded" container=5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c type=CONTAINER_CREATED_EVENT Jul 7 02:20:54.825005 containerd[2803]: time="2025-07-07T02:20:54.824986818Z" level=warning msg="container event discarded" container=5124a25ee288a7a28e5260f3f166513d4b6a0d09f4e861bfa25a703266ced09c type=CONTAINER_STARTED_EVENT Jul 7 02:20:55.135573 containerd[2803]: time="2025-07-07T02:20:55.135530882Z" level=warning msg="container event discarded" container=645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e type=CONTAINER_CREATED_EVENT Jul 7 02:20:55.135573 containerd[2803]: time="2025-07-07T02:20:55.135548362Z" level=warning msg="container event discarded" container=645cfe65fd2dea00f9c3750dc6d76a13e955deba743428de8ce996ceca352e1e type=CONTAINER_STARTED_EVENT Jul 7 02:20:55.454561 containerd[2803]: time="2025-07-07T02:20:55.454469964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"875f799992790cce71d9c50bd1977e7513726cf8857a39587e8f53850b2438c0\" pid:11407 exited_at:{seconds:1751854855 nanos:454303204}" Jul 7 02:20:55.523514 containerd[2803]: 
time="2025-07-07T02:20:55.523463235Z" level=warning msg="container event discarded" container=4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a type=CONTAINER_CREATED_EVENT Jul 7 02:20:55.582512 containerd[2803]: time="2025-07-07T02:20:55.582475404Z" level=warning msg="container event discarded" container=4e44498bf0304aaab6b1b453d8ae3c09a61fd8924ff84772070a09d55723942a type=CONTAINER_STARTED_EVENT Jul 7 02:20:55.864090 containerd[2803]: time="2025-07-07T02:20:55.864045556Z" level=warning msg="container event discarded" container=4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af type=CONTAINER_CREATED_EVENT Jul 7 02:20:55.915338 containerd[2803]: time="2025-07-07T02:20:55.915297260Z" level=warning msg="container event discarded" container=4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af type=CONTAINER_STARTED_EVENT Jul 7 02:20:56.235312 containerd[2803]: time="2025-07-07T02:20:56.235228493Z" level=warning msg="container event discarded" container=4190d67c80b6de9746dafb02d475490753b688e59de7aa136f2bd89d1ac963af type=CONTAINER_STOPPED_EVENT Jul 7 02:20:57.937901 containerd[2803]: time="2025-07-07T02:20:57.937859581Z" level=warning msg="container event discarded" container=eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4 type=CONTAINER_CREATED_EVENT Jul 7 02:20:57.997089 containerd[2803]: time="2025-07-07T02:20:57.997050786Z" level=warning msg="container event discarded" container=eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4 type=CONTAINER_STARTED_EVENT Jul 7 02:20:58.528130 containerd[2803]: time="2025-07-07T02:20:58.528093343Z" level=warning msg="container event discarded" container=eae4033bb8efa88046c4bf0ce9e70dbf51bcb2cac0a99cd3e8650cc52bd13ad4 type=CONTAINER_STOPPED_EVENT Jul 7 02:21:01.387831 containerd[2803]: time="2025-07-07T02:21:01.387772121Z" level=warning msg="container event discarded" container=762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05 
type=CONTAINER_CREATED_EVENT Jul 7 02:21:01.447079 containerd[2803]: time="2025-07-07T02:21:01.447046560Z" level=warning msg="container event discarded" container=762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05 type=CONTAINER_STARTED_EVENT Jul 7 02:21:03.151392 containerd[2803]: time="2025-07-07T02:21:03.151346066Z" level=warning msg="container event discarded" container=d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67 type=CONTAINER_CREATED_EVENT Jul 7 02:21:03.151392 containerd[2803]: time="2025-07-07T02:21:03.151374586Z" level=warning msg="container event discarded" container=d54384ff17c406bc60cf98f10a02e81d67efcfc8e3d944a4c5aa0b24adbc6a67 type=CONTAINER_STARTED_EVENT Jul 7 02:21:03.610546 containerd[2803]: time="2025-07-07T02:21:03.610507258Z" level=warning msg="container event discarded" container=874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47 type=CONTAINER_CREATED_EVENT Jul 7 02:21:03.669179 containerd[2803]: time="2025-07-07T02:21:03.669149694Z" level=warning msg="container event discarded" container=874da14a1f8b56461c076ebb776693f2dd346bc9d895b2a924c112fa979f6c47 type=CONTAINER_STARTED_EVENT Jul 7 02:21:04.385950 containerd[2803]: time="2025-07-07T02:21:04.385894093Z" level=warning msg="container event discarded" container=6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e type=CONTAINER_CREATED_EVENT Jul 7 02:21:04.447145 containerd[2803]: time="2025-07-07T02:21:04.447106483Z" level=warning msg="container event discarded" container=6c9dc7768516616fc7fe516af06ed404f3083463f456de9b46bcbfb1b59ddc7e type=CONTAINER_STARTED_EVENT Jul 7 02:21:08.370860 containerd[2803]: time="2025-07-07T02:21:08.370818589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"9371cb017d01a16620fa8358f8ee8fd1970fea520448b8aaffbb38cd3882c552\" pid:11453 exited_at:{seconds:1751854868 nanos:370547629}" Jul 7 02:21:09.743451 
containerd[2803]: time="2025-07-07T02:21:09.743414376Z" level=warning msg="container event discarded" container=b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074 type=CONTAINER_CREATED_EVENT Jul 7 02:21:09.743804 containerd[2803]: time="2025-07-07T02:21:09.743768656Z" level=warning msg="container event discarded" container=b9a0b574e8338ca25cbf3ec4514e57f94db74bf3aa85cb2f7ae5f360ae15d074 type=CONTAINER_STARTED_EVENT Jul 7 02:21:09.821751 containerd[2803]: time="2025-07-07T02:21:09.821709919Z" level=warning msg="container event discarded" container=79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc type=CONTAINER_CREATED_EVENT Jul 7 02:21:09.821751 containerd[2803]: time="2025-07-07T02:21:09.821745959Z" level=warning msg="container event discarded" container=79392330574c1003ae2ae9ca144164bae195a28858203650cd09e7f7a5ada2bc type=CONTAINER_STARTED_EVENT Jul 7 02:21:10.663262 containerd[2803]: time="2025-07-07T02:21:10.663235317Z" level=warning msg="container event discarded" container=f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e type=CONTAINER_CREATED_EVENT Jul 7 02:21:10.721517 containerd[2803]: time="2025-07-07T02:21:10.721469904Z" level=warning msg="container event discarded" container=f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e type=CONTAINER_STARTED_EVENT Jul 7 02:21:11.093550 containerd[2803]: time="2025-07-07T02:21:11.093511050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"84f50574348550d68e01a46a9932c1488433bf7072757d32e041d6949601f751\" pid:11490 exited_at:{seconds:1751854871 nanos:93366731}" Jul 7 02:21:11.543870 containerd[2803]: time="2025-07-07T02:21:11.543771929Z" level=warning msg="container event discarded" container=029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a type=CONTAINER_CREATED_EVENT Jul 7 02:21:11.603314 containerd[2803]: time="2025-07-07T02:21:11.603283191Z" 
level=warning msg="container event discarded" container=029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a type=CONTAINER_STARTED_EVENT Jul 7 02:21:11.704592 containerd[2803]: time="2025-07-07T02:21:11.704569397Z" level=warning msg="container event discarded" container=049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 type=CONTAINER_CREATED_EVENT Jul 7 02:21:11.704592 containerd[2803]: time="2025-07-07T02:21:11.704591317Z" level=warning msg="container event discarded" container=049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 type=CONTAINER_STARTED_EVENT Jul 7 02:21:11.830100 containerd[2803]: time="2025-07-07T02:21:11.830008626Z" level=warning msg="container event discarded" container=33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3 type=CONTAINER_CREATED_EVENT Jul 7 02:21:11.830100 containerd[2803]: time="2025-07-07T02:21:11.830032346Z" level=warning msg="container event discarded" container=33fffef88a5abda26363c7921aab68e4a0d477dab5515979b714ada3d65be1f3 type=CONTAINER_STARTED_EVENT Jul 7 02:21:11.830100 containerd[2803]: time="2025-07-07T02:21:11.830041506Z" level=warning msg="container event discarded" container=116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86 type=CONTAINER_CREATED_EVENT Jul 7 02:21:11.887240 containerd[2803]: time="2025-07-07T02:21:11.887210054Z" level=warning msg="container event discarded" container=116c38412713e3ed4232cc8ceeda12d5fc0620dbf95fae23da9925c8b10cad86 type=CONTAINER_STARTED_EVENT Jul 7 02:21:12.540804 containerd[2803]: time="2025-07-07T02:21:12.540769169Z" level=warning msg="container event discarded" container=1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e type=CONTAINER_CREATED_EVENT Jul 7 02:21:12.601050 containerd[2803]: time="2025-07-07T02:21:12.600999268Z" level=warning msg="container event discarded" container=1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e type=CONTAINER_STARTED_EVENT Jul 7 02:21:12.705028 
containerd[2803]: time="2025-07-07T02:21:12.704996585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"1769a83516b63dd829a409d696519932104711260b159f23a74371289dcbc661\" pid:11513 exited_at:{seconds:1751854872 nanos:704795986}" Jul 7 02:21:12.712989 containerd[2803]: time="2025-07-07T02:21:12.712967607Z" level=warning msg="container event discarded" container=90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b type=CONTAINER_CREATED_EVENT Jul 7 02:21:12.712989 containerd[2803]: time="2025-07-07T02:21:12.712983207Z" level=warning msg="container event discarded" container=90e46763ce285c88ec52e4639bf491eed0a3b8cd3d1de652e8908831ba31f95b type=CONTAINER_STARTED_EVENT Jul 7 02:21:12.713077 containerd[2803]: time="2025-07-07T02:21:12.712991567Z" level=warning msg="container event discarded" container=42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a type=CONTAINER_CREATED_EVENT Jul 7 02:21:12.802337 containerd[2803]: time="2025-07-07T02:21:12.802262038Z" level=warning msg="container event discarded" container=42734c28764c947ce567e9706206911257f913a70c8cfb7cd606b7d6a241f71a type=CONTAINER_STARTED_EVENT Jul 7 02:21:12.819478 containerd[2803]: time="2025-07-07T02:21:12.819448638Z" level=warning msg="container event discarded" container=a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316 type=CONTAINER_CREATED_EVENT Jul 7 02:21:12.819478 containerd[2803]: time="2025-07-07T02:21:12.819466118Z" level=warning msg="container event discarded" container=a0ce05cd039f33386b07ba22978b1d95f0c69d75f485d943d2920105e225c316 type=CONTAINER_STARTED_EVENT Jul 7 02:21:13.203035 containerd[2803]: time="2025-07-07T02:21:13.203001537Z" level=warning msg="container event discarded" container=3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219 type=CONTAINER_CREATED_EVENT Jul 7 02:21:13.261326 containerd[2803]: time="2025-07-07T02:21:13.261292319Z" 
level=warning msg="container event discarded" container=3778060c18e1be126438273427c1cd79f624bd231aafb8c8172f44bb0c3de219 type=CONTAINER_STARTED_EVENT Jul 7 02:21:13.716626 containerd[2803]: time="2025-07-07T02:21:13.716552244Z" level=warning msg="container event discarded" container=52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f type=CONTAINER_CREATED_EVENT Jul 7 02:21:13.716626 containerd[2803]: time="2025-07-07T02:21:13.716616484Z" level=warning msg="container event discarded" container=52e25f15e9d682ca418a942f740ebca081ec141afc633f9f0922e36a260e408f type=CONTAINER_STARTED_EVENT Jul 7 02:21:13.727822 containerd[2803]: time="2025-07-07T02:21:13.727788098Z" level=warning msg="container event discarded" container=2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4 type=CONTAINER_CREATED_EVENT Jul 7 02:21:13.738986 containerd[2803]: time="2025-07-07T02:21:13.738964591Z" level=warning msg="container event discarded" container=06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3 type=CONTAINER_CREATED_EVENT Jul 7 02:21:13.782140 containerd[2803]: time="2025-07-07T02:21:13.782119809Z" level=warning msg="container event discarded" container=2df2641a1bd84c42a83e26c8c577ca445970328e9e8ed5fff88f8261f23ce8c4 type=CONTAINER_STARTED_EVENT Jul 7 02:21:13.782140 containerd[2803]: time="2025-07-07T02:21:13.782137089Z" level=warning msg="container event discarded" container=06423e9b362a430c03a2d94f9f16f46642324804f7cd9283c4da556f2c0a4ad3 type=CONTAINER_STARTED_EVENT Jul 7 02:21:15.738064 containerd[2803]: time="2025-07-07T02:21:15.738023855Z" level=warning msg="container event discarded" container=07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f type=CONTAINER_CREATED_EVENT Jul 7 02:21:15.738462 containerd[2803]: time="2025-07-07T02:21:15.738418974Z" level=warning msg="container event discarded" container=07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f type=CONTAINER_STARTED_EVENT Jul 7 02:21:15.738462 
containerd[2803]: time="2025-07-07T02:21:15.738441134Z" level=warning msg="container event discarded" container=2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6 type=CONTAINER_CREATED_EVENT Jul 7 02:21:15.804648 containerd[2803]: time="2025-07-07T02:21:15.804610654Z" level=warning msg="container event discarded" container=2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6 type=CONTAINER_STARTED_EVENT Jul 7 02:21:20.428769 containerd[2803]: time="2025-07-07T02:21:20.428708417Z" level=warning msg="container event discarded" container=4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f type=CONTAINER_CREATED_EVENT Jul 7 02:21:20.428769 containerd[2803]: time="2025-07-07T02:21:20.428746297Z" level=warning msg="container event discarded" container=4beabd09797996f0e84ba3bcc112080d6e898a126caecdf17398943633251b6f type=CONTAINER_STARTED_EVENT Jul 7 02:21:20.473951 containerd[2803]: time="2025-07-07T02:21:20.473908023Z" level=warning msg="container event discarded" container=46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92 type=CONTAINER_CREATED_EVENT Jul 7 02:21:20.509149 containerd[2803]: time="2025-07-07T02:21:20.509101135Z" level=warning msg="container event discarded" container=2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6 type=CONTAINER_STOPPED_EVENT Jul 7 02:21:20.529335 containerd[2803]: time="2025-07-07T02:21:20.529291524Z" level=warning msg="container event discarded" container=46c36d0c3a50916a03b8ce3975b57371c83c8e154aee30c4c4059b62606fbd92 type=CONTAINER_STARTED_EVENT Jul 7 02:21:20.551463 containerd[2803]: time="2025-07-07T02:21:20.551415668Z" level=warning msg="container event discarded" container=07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f type=CONTAINER_STOPPED_EVENT Jul 7 02:21:21.875131 containerd[2803]: time="2025-07-07T02:21:21.875077880Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"c6677e26895063bc29eae38acd3b1a67b57076b480526d6745bdda89b8591cb5\" pid:11553 exited_at:{seconds:1751854881 nanos:874857441}" Jul 7 02:21:25.438781 containerd[2803]: time="2025-07-07T02:21:25.438743564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"93b851e87a37f81286a3152453629c68e9c9d826d334f5ee20162968949565f9\" pid:11589 exited_at:{seconds:1751854885 nanos:438556805}" Jul 7 02:21:33.564352 containerd[2803]: time="2025-07-07T02:21:33.564288182Z" level=warning msg="container event discarded" container=2df37f1a3d1355680e2d85f2f4c6e0827d4222008411555c3f70d384655323b6 type=CONTAINER_DELETED_EVENT Jul 7 02:21:33.687233 containerd[2803]: time="2025-07-07T02:21:33.687189322Z" level=warning msg="container event discarded" container=07f5d8a354644ccf89e63890f189962c007006fbd0d267a6046fadff1de14a7f type=CONTAINER_DELETED_EVENT Jul 7 02:21:38.370735 containerd[2803]: time="2025-07-07T02:21:38.370679190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"9838ddc8ad948bf9411a9be6d4e8aaa1a34fc9eaa6c88bbc3974d1fa4b057709\" pid:11615 exited_at:{seconds:1751854898 nanos:370434751}" Jul 7 02:21:42.711192 containerd[2803]: time="2025-07-07T02:21:42.711146550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"89d665676ecf038bd61026cc05f33fe8f456662c99c256123dcfd8d1c911c5c6\" pid:11661 exited_at:{seconds:1751854902 nanos:710935831}" Jul 7 02:21:49.804900 containerd[2803]: time="2025-07-07T02:21:49.804842844Z" level=warning msg="container event discarded" container=1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e type=CONTAINER_STOPPED_EVENT Jul 7 02:21:49.857475 containerd[2803]: 
time="2025-07-07T02:21:49.857446766Z" level=warning msg="container event discarded" container=049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 type=CONTAINER_STOPPED_EVENT Jul 7 02:21:50.731556 containerd[2803]: time="2025-07-07T02:21:50.731519452Z" level=warning msg="container event discarded" container=1e084b5cb39c267f030cea7d6e9cf3ee7dec43cd002700078ff31287a4d1563e type=CONTAINER_DELETED_EVENT Jul 7 02:21:55.445505 containerd[2803]: time="2025-07-07T02:21:55.445459718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"89ad195f667d2b15ffe68d49f96d7728d86aa55184d3c64ddf84745214ede93a\" pid:11706 exited_at:{seconds:1751854915 nanos:445252798}" Jul 7 02:22:08.363878 containerd[2803]: time="2025-07-07T02:22:08.363820409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"3eaaa2a6dc29dea99b5f4891b2d4a7ea167780e69316599f7b367b090ae2ca39\" pid:11731 exited_at:{seconds:1751854928 nanos:363541449}" Jul 7 02:22:11.101741 containerd[2803]: time="2025-07-07T02:22:11.101704758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"fe1b199d5a71070dad16905d3453392180ae1bc2bbb42db936e1eb38728c3ecc\" pid:11768 exited_at:{seconds:1751854931 nanos:101511959}" Jul 7 02:22:12.705992 containerd[2803]: time="2025-07-07T02:22:12.705944299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"fdec2e0bcedc1cca2f666b0d1882cba3f0340e432ed30bdc627fa6f5eefb21ba\" pid:11792 exited_at:{seconds:1751854932 nanos:705755099}" Jul 7 02:22:21.881516 containerd[2803]: time="2025-07-07T02:22:21.881481370Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"6d7b14588d6259cab6da6f6687bbe601e967f9d12e28a2ab22d86b9c371c4faf\" pid:11833 exited_at:{seconds:1751854941 nanos:881303611}" Jul 7 02:22:25.448562 containerd[2803]: time="2025-07-07T02:22:25.448527888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"e85033df8dbfa1d6aa2dfa086eb4f138621c614879ceb2e5b26f3360b7ce3c6e\" pid:11867 exited_at:{seconds:1751854945 nanos:448352569}" Jul 7 02:22:33.809808 containerd[2803]: time="2025-07-07T02:22:33.809701524Z" level=warning msg="container event discarded" container=049fdbf0cb5f6a5f47741f1440148feb4ba28c00202074d4a4a63f29d8e23a83 type=CONTAINER_DELETED_EVENT Jul 7 02:22:38.369210 containerd[2803]: time="2025-07-07T02:22:38.369172516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"273d2ebd4a62b5175e8b38c564fc823729b008da7c8db59b08ff92554ec41e1a\" pid:11921 exited_at:{seconds:1751854958 nanos:368882277}" Jul 7 02:22:42.706856 containerd[2803]: time="2025-07-07T02:22:42.706819793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"7c4a695624f952b6893a692c1df1126b922da39ed0354299c21efacc54185b4b\" pid:11956 exited_at:{seconds:1751854962 nanos:706604034}" Jul 7 02:22:48.155845 systemd[1]: Started sshd@10-147.28.151.230:22-80.94.95.116:26262.service - OpenSSH per-connection server daemon (80.94.95.116:26262). Jul 7 02:22:51.097600 sshd[11980]: Connection closed by authenticating user operator 80.94.95.116 port 26262 [preauth] Jul 7 02:22:51.099767 systemd[1]: sshd@10-147.28.151.230:22-80.94.95.116:26262.service: Deactivated successfully. 
Jul 7 02:22:53.592629 systemd[1]: Started sshd@11-147.28.151.230:22-180.76.151.217:54942.service - OpenSSH per-connection server daemon (180.76.151.217:54942). Jul 7 02:22:54.849024 sshd[11987]: Invalid user wialon from 180.76.151.217 port 54942 Jul 7 02:22:55.082125 sshd[11987]: Received disconnect from 180.76.151.217 port 54942:11: Bye Bye [preauth] Jul 7 02:22:55.082125 sshd[11987]: Disconnected from invalid user wialon 180.76.151.217 port 54942 [preauth] Jul 7 02:22:55.084231 systemd[1]: sshd@11-147.28.151.230:22-180.76.151.217:54942.service: Deactivated successfully. Jul 7 02:22:55.448704 containerd[2803]: time="2025-07-07T02:22:55.448664966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"f93fe57739dee11ac8c7e91ba784b24de7efa502af562e9b2f67adb152777114\" pid:12003 exited_at:{seconds:1751854975 nanos:448479847}" Jul 7 02:23:08.369990 containerd[2803]: time="2025-07-07T02:23:08.369945088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"cf2ef4f9777053c3bc5fbf7f2e666cfea6e8e22c36f3b81557dba58c69f58e5e\" pid:12025 exited_at:{seconds:1751854988 nanos:369673929}" Jul 7 02:23:11.101434 containerd[2803]: time="2025-07-07T02:23:11.101394316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"59d063cec170fac3e3588c954c9752664b5acf599035b6bd33db05e14e8bc3bd\" pid:12057 exited_at:{seconds:1751854991 nanos:101213116}" Jul 7 02:23:12.703958 containerd[2803]: time="2025-07-07T02:23:12.703914523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"9b785af76ed1fe654ae4c841792c0d86d65836cefbbbb99c9f94a2599fda84d7\" pid:12080 exited_at:{seconds:1751854992 nanos:703713483}" Jul 7 02:23:21.881063 containerd[2803]: 
time="2025-07-07T02:23:21.881010490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"b24f064c60c243fd19914e3c53b0f0f0b8e67e7b9b8d42219882178d52818412\" pid:12122 exited_at:{seconds:1751855001 nanos:880525002}" Jul 7 02:23:25.448698 containerd[2803]: time="2025-07-07T02:23:25.448633576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"c26dd607a55f3e4bd63fd663127148fa0c3458fbf193d1e99897ba682e2610ed\" pid:12160 exited_at:{seconds:1751855005 nanos:448461014}" Jul 7 02:23:37.116618 systemd[1]: Started sshd@12-147.28.151.230:22-14.29.238.151:59308.service - OpenSSH per-connection server daemon (14.29.238.151:59308). Jul 7 02:23:38.363495 containerd[2803]: time="2025-07-07T02:23:38.363458717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"016997836e4ca1acab11a747d1e46e6dc654dc40e78949bb2f0547f9ba969521\" pid:12186 exited_at:{seconds:1751855018 nanos:363251834}" Jul 7 02:23:39.144606 systemd[1]: Started sshd@13-147.28.151.230:22-139.178.68.195:50204.service - OpenSSH per-connection server daemon (139.178.68.195:50204). Jul 7 02:23:39.552444 sshd[12214]: Accepted publickey for core from 139.178.68.195 port 50204 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:23:39.553659 sshd-session[12214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:23:39.557411 systemd-logind[2788]: New session 10 of user core. Jul 7 02:23:39.579844 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 02:23:39.910172 sshd[12216]: Connection closed by 139.178.68.195 port 50204 Jul 7 02:23:39.910525 sshd-session[12214]: pam_unix(sshd:session): session closed for user core Jul 7 02:23:39.913596 systemd[1]: sshd@13-147.28.151.230:22-139.178.68.195:50204.service: Deactivated successfully. Jul 7 02:23:39.915838 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 02:23:39.916456 systemd-logind[2788]: Session 10 logged out. Waiting for processes to exit. Jul 7 02:23:39.917282 systemd-logind[2788]: Removed session 10. Jul 7 02:23:42.705282 containerd[2803]: time="2025-07-07T02:23:42.705247891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"31e0100942d0b5ac2d17057ae0e97d699f4ce86f6ffb4018e3a938a2f7b8c7ad\" pid:12266 exited_at:{seconds:1751855022 nanos:705053248}" Jul 7 02:23:44.994574 systemd[1]: Started sshd@14-147.28.151.230:22-139.178.68.195:50206.service - OpenSSH per-connection server daemon (139.178.68.195:50206). Jul 7 02:23:45.396885 sshd[12295]: Accepted publickey for core from 139.178.68.195 port 50206 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:23:45.398253 sshd-session[12295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:23:45.401660 systemd-logind[2788]: New session 11 of user core. Jul 7 02:23:45.418792 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 02:23:45.743838 sshd[12298]: Connection closed by 139.178.68.195 port 50206 Jul 7 02:23:45.744148 sshd-session[12295]: pam_unix(sshd:session): session closed for user core Jul 7 02:23:45.747360 systemd[1]: sshd@14-147.28.151.230:22-139.178.68.195:50206.service: Deactivated successfully. Jul 7 02:23:45.749629 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 02:23:45.750235 systemd-logind[2788]: Session 11 logged out. Waiting for processes to exit. 
Jul 7 02:23:45.751084 systemd-logind[2788]: Removed session 11. Jul 7 02:23:45.822593 systemd[1]: Started sshd@15-147.28.151.230:22-139.178.68.195:50222.service - OpenSSH per-connection server daemon (139.178.68.195:50222). Jul 7 02:23:46.231699 sshd[12340]: Accepted publickey for core from 139.178.68.195 port 50222 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:23:46.233066 sshd-session[12340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:23:46.236638 systemd-logind[2788]: New session 12 of user core. Jul 7 02:23:46.254849 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 02:23:46.608533 sshd[12342]: Connection closed by 139.178.68.195 port 50222 Jul 7 02:23:46.608918 sshd-session[12340]: pam_unix(sshd:session): session closed for user core Jul 7 02:23:46.612223 systemd[1]: sshd@15-147.28.151.230:22-139.178.68.195:50222.service: Deactivated successfully. Jul 7 02:23:46.614325 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 02:23:46.614960 systemd-logind[2788]: Session 12 logged out. Waiting for processes to exit. Jul 7 02:23:46.615868 systemd-logind[2788]: Removed session 12. Jul 7 02:23:46.693446 systemd[1]: Started sshd@16-147.28.151.230:22-139.178.68.195:50232.service - OpenSSH per-connection server daemon (139.178.68.195:50232). Jul 7 02:23:47.126909 sshd[12379]: Accepted publickey for core from 139.178.68.195 port 50232 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:23:47.128069 sshd-session[12379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:23:47.131171 systemd-logind[2788]: New session 13 of user core. Jul 7 02:23:47.146782 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 7 02:23:47.480811 sshd[12381]: Connection closed by 139.178.68.195 port 50232 Jul 7 02:23:47.481130 sshd-session[12379]: pam_unix(sshd:session): session closed for user core Jul 7 02:23:47.484211 systemd[1]: sshd@16-147.28.151.230:22-139.178.68.195:50232.service: Deactivated successfully. Jul 7 02:23:47.485891 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 02:23:47.486482 systemd-logind[2788]: Session 13 logged out. Waiting for processes to exit. Jul 7 02:23:47.487300 systemd-logind[2788]: Removed session 13. Jul 7 02:23:52.561647 systemd[1]: Started sshd@17-147.28.151.230:22-139.178.68.195:37750.service - OpenSSH per-connection server daemon (139.178.68.195:37750). Jul 7 02:23:52.974529 sshd[12421]: Accepted publickey for core from 139.178.68.195 port 37750 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE Jul 7 02:23:52.975728 sshd-session[12421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:23:52.979028 systemd-logind[2788]: New session 14 of user core. Jul 7 02:23:53.002787 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 02:23:53.330713 sshd[12423]: Connection closed by 139.178.68.195 port 37750 Jul 7 02:23:53.331010 sshd-session[12421]: pam_unix(sshd:session): session closed for user core Jul 7 02:23:53.334118 systemd[1]: sshd@17-147.28.151.230:22-139.178.68.195:37750.service: Deactivated successfully. Jul 7 02:23:53.335787 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 02:23:53.336378 systemd-logind[2788]: Session 14 logged out. Waiting for processes to exit. Jul 7 02:23:53.337238 systemd-logind[2788]: Removed session 14. 
Jul 7 02:23:55.446394 containerd[2803]: time="2025-07-07T02:23:55.446361150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"a7f55c484cfb8955bdbb13e31508b092917b9465c758444830ce354e845aa083\" pid:12467 exited_at:{seconds:1751855035 nanos:446170707}"
Jul 7 02:23:58.405653 systemd[1]: Started sshd@18-147.28.151.230:22-139.178.68.195:58806.service - OpenSSH per-connection server daemon (139.178.68.195:58806).
Jul 7 02:23:58.808205 sshd[12478]: Accepted publickey for core from 139.178.68.195 port 58806 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:23:58.809357 sshd-session[12478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:23:58.812778 systemd-logind[2788]: New session 15 of user core.
Jul 7 02:23:58.835784 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 02:23:59.155446 sshd[12480]: Connection closed by 139.178.68.195 port 58806
Jul 7 02:23:59.155832 sshd-session[12478]: pam_unix(sshd:session): session closed for user core
Jul 7 02:23:59.158991 systemd[1]: sshd@18-147.28.151.230:22-139.178.68.195:58806.service: Deactivated successfully.
Jul 7 02:23:59.161237 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 02:23:59.161850 systemd-logind[2788]: Session 15 logged out. Waiting for processes to exit.
Jul 7 02:23:59.162675 systemd-logind[2788]: Removed session 15.
Jul 7 02:24:04.232690 systemd[1]: Started sshd@19-147.28.151.230:22-139.178.68.195:58816.service - OpenSSH per-connection server daemon (139.178.68.195:58816).
Jul 7 02:24:04.634802 sshd[12516]: Accepted publickey for core from 139.178.68.195 port 58816 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:04.635996 sshd-session[12516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:04.639250 systemd-logind[2788]: New session 16 of user core.
Jul 7 02:24:04.665840 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 02:24:04.984835 sshd[12518]: Connection closed by 139.178.68.195 port 58816
Jul 7 02:24:04.985138 sshd-session[12516]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:04.988271 systemd[1]: sshd@19-147.28.151.230:22-139.178.68.195:58816.service: Deactivated successfully.
Jul 7 02:24:04.990513 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 02:24:04.991125 systemd-logind[2788]: Session 16 logged out. Waiting for processes to exit.
Jul 7 02:24:04.991959 systemd-logind[2788]: Removed session 16.
Jul 7 02:24:05.067507 systemd[1]: Started sshd@20-147.28.151.230:22-139.178.68.195:58824.service - OpenSSH per-connection server daemon (139.178.68.195:58824).
Jul 7 02:24:05.467403 sshd[12556]: Accepted publickey for core from 139.178.68.195 port 58824 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:05.468646 sshd-session[12556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:05.471960 systemd-logind[2788]: New session 17 of user core.
Jul 7 02:24:05.496843 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 02:24:05.858418 sshd[12558]: Connection closed by 139.178.68.195 port 58824
Jul 7 02:24:05.858751 sshd-session[12556]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:05.861808 systemd[1]: sshd@20-147.28.151.230:22-139.178.68.195:58824.service: Deactivated successfully.
Jul 7 02:24:05.864040 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 02:24:05.864629 systemd-logind[2788]: Session 17 logged out. Waiting for processes to exit.
Jul 7 02:24:05.865483 systemd-logind[2788]: Removed session 17.
Jul 7 02:24:05.943475 systemd[1]: Started sshd@21-147.28.151.230:22-139.178.68.195:58834.service - OpenSSH per-connection server daemon (139.178.68.195:58834).
Jul 7 02:24:06.351450 sshd[12591]: Accepted publickey for core from 139.178.68.195 port 58834 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:06.352636 sshd-session[12591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:06.355724 systemd-logind[2788]: New session 18 of user core.
Jul 7 02:24:06.379794 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 02:24:07.050123 sshd[12610]: Connection closed by 139.178.68.195 port 58834
Jul 7 02:24:07.050488 sshd-session[12591]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:07.053751 systemd[1]: sshd@21-147.28.151.230:22-139.178.68.195:58834.service: Deactivated successfully.
Jul 7 02:24:07.056031 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 02:24:07.056265 systemd[1]: session-18.scope: Consumed 539ms CPU time, 66M memory peak.
Jul 7 02:24:07.056656 systemd-logind[2788]: Session 18 logged out. Waiting for processes to exit.
Jul 7 02:24:07.057503 systemd-logind[2788]: Removed session 18.
Jul 7 02:24:07.124437 systemd[1]: Started sshd@22-147.28.151.230:22-139.178.68.195:58846.service - OpenSSH per-connection server daemon (139.178.68.195:58846).
Jul 7 02:24:07.529974 sshd[12663]: Accepted publickey for core from 139.178.68.195 port 58846 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:07.531191 sshd-session[12663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:07.534414 systemd-logind[2788]: New session 19 of user core.
Jul 7 02:24:07.553787 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 02:24:07.963622 sshd[12665]: Connection closed by 139.178.68.195 port 58846
Jul 7 02:24:07.963949 sshd-session[12663]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:07.966939 systemd[1]: sshd@22-147.28.151.230:22-139.178.68.195:58846.service: Deactivated successfully.
Jul 7 02:24:07.969309 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 02:24:07.970087 systemd-logind[2788]: Session 19 logged out. Waiting for processes to exit.
Jul 7 02:24:07.971437 systemd-logind[2788]: Removed session 19.
Jul 7 02:24:08.044430 systemd[1]: Started sshd@23-147.28.151.230:22-139.178.68.195:58848.service - OpenSSH per-connection server daemon (139.178.68.195:58848).
Jul 7 02:24:08.373335 containerd[2803]: time="2025-07-07T02:24:08.373292759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"762e7a628dec4e5555ddadcd8b388c58dc477edaf49cdfe033d09bbb40bd9f05\" id:\"8ea28083c39bc4c6e25383cfa2004ad3bdaa483b723d86aa9c0c13c13eecbf99\" pid:12731 exited_at:{seconds:1751855048 nanos:373024276}"
Jul 7 02:24:08.446843 sshd[12718]: Accepted publickey for core from 139.178.68.195 port 58848 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:08.448180 sshd-session[12718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:08.451531 systemd-logind[2788]: New session 20 of user core.
Jul 7 02:24:08.471797 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 02:24:08.794587 sshd[12755]: Connection closed by 139.178.68.195 port 58848
Jul 7 02:24:08.794883 sshd-session[12718]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:08.797941 systemd[1]: sshd@23-147.28.151.230:22-139.178.68.195:58848.service: Deactivated successfully.
Jul 7 02:24:08.799629 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 02:24:08.800247 systemd-logind[2788]: Session 20 logged out. Waiting for processes to exit.
Jul 7 02:24:08.801084 systemd-logind[2788]: Removed session 20.
Jul 7 02:24:11.105576 containerd[2803]: time="2025-07-07T02:24:11.105539928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"2ec7b4888ed509593e03cb054bb50b04de47e36244039e37221518b931610c62\" pid:12806 exited_at:{seconds:1751855051 nanos:105386366}"
Jul 7 02:24:12.708564 containerd[2803]: time="2025-07-07T02:24:12.708528426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"ab1b0dbe70e45c6f01fea09861b2cc9dc104b591602cefe633f5a22c09129326\" pid:12830 exited_at:{seconds:1751855052 nanos:708333184}"
Jul 7 02:24:13.875718 systemd[1]: Started sshd@24-147.28.151.230:22-139.178.68.195:45730.service - OpenSSH per-connection server daemon (139.178.68.195:45730).
Jul 7 02:24:14.286912 sshd[12857]: Accepted publickey for core from 139.178.68.195 port 45730 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:14.288126 sshd-session[12857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:14.291644 systemd-logind[2788]: New session 21 of user core.
Jul 7 02:24:14.318869 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 02:24:14.637306 sshd[12859]: Connection closed by 139.178.68.195 port 45730
Jul 7 02:24:14.637687 sshd-session[12857]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:14.640778 systemd[1]: sshd@24-147.28.151.230:22-139.178.68.195:45730.service: Deactivated successfully.
Jul 7 02:24:14.643055 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 02:24:14.643702 systemd-logind[2788]: Session 21 logged out. Waiting for processes to exit.
Jul 7 02:24:14.644543 systemd-logind[2788]: Removed session 21.
Jul 7 02:24:19.717644 systemd[1]: Started sshd@25-147.28.151.230:22-139.178.68.195:35346.service - OpenSSH per-connection server daemon (139.178.68.195:35346).
Jul 7 02:24:20.125956 sshd[12897]: Accepted publickey for core from 139.178.68.195 port 35346 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:20.127247 sshd-session[12897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:20.130643 systemd-logind[2788]: New session 22 of user core.
Jul 7 02:24:20.145792 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 02:24:20.476339 sshd[12899]: Connection closed by 139.178.68.195 port 35346
Jul 7 02:24:20.476619 sshd-session[12897]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:20.479675 systemd[1]: sshd@25-147.28.151.230:22-139.178.68.195:35346.service: Deactivated successfully.
Jul 7 02:24:20.482262 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 02:24:20.482878 systemd-logind[2788]: Session 22 logged out. Waiting for processes to exit.
Jul 7 02:24:20.483697 systemd-logind[2788]: Removed session 22.
Jul 7 02:24:21.879399 containerd[2803]: time="2025-07-07T02:24:21.879362169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f328ba1b9fe076dfbe8437220965196961b7d1093a98bff5ea038f816495ca7e\" id:\"1ef820243aeb82c1cc7c25456fcb46fa00a66aa6a6bb46b2eefdc2a7ca1d563a\" pid:12944 exited_at:{seconds:1751855061 nanos:879182287}"
Jul 7 02:24:25.447435 containerd[2803]: time="2025-07-07T02:24:25.447398298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"029a9d4fec6a723c9671101c502a7bed52470da3939059a0d8c8aae5267ca56a\" id:\"2f958a84af509e1485dde2256f451b925059210c91e457e999fb9950ae52ab55\" pid:12982 exited_at:{seconds:1751855065 nanos:447189416}"
Jul 7 02:24:25.554578 systemd[1]: Started sshd@26-147.28.151.230:22-139.178.68.195:35354.service - OpenSSH per-connection server daemon (139.178.68.195:35354).
Jul 7 02:24:25.967343 sshd[12993]: Accepted publickey for core from 139.178.68.195 port 35354 ssh2: RSA SHA256:0xkyFWbXKjN9YAAM+kHkxcgbLPf9sWmbw8KDci4CWPE
Jul 7 02:24:25.968631 sshd-session[12993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 02:24:25.975283 systemd-logind[2788]: New session 23 of user core.
Jul 7 02:24:25.995795 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 02:24:26.325814 sshd[12995]: Connection closed by 139.178.68.195 port 35354
Jul 7 02:24:26.326168 sshd-session[12993]: pam_unix(sshd:session): session closed for user core
Jul 7 02:24:26.329291 systemd[1]: sshd@26-147.28.151.230:22-139.178.68.195:35354.service: Deactivated successfully.
Jul 7 02:24:26.331553 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 02:24:26.332158 systemd-logind[2788]: Session 23 logged out. Waiting for processes to exit.
Jul 7 02:24:26.332974 systemd-logind[2788]: Removed session 23.