May 8 00:27:56.183990 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 8 00:27:56.184012 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 7 22:21:35 -00 2025
May 8 00:27:56.184020 kernel: KASLR enabled
May 8 00:27:56.184026 kernel: efi: EFI v2.7 by American Megatrends
May 8 00:27:56.184032 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47c818 RNG=0xebf10018 MEMRESERVE=0xe465af98
May 8 00:27:56.184037 kernel: random: crng init done
May 8 00:27:56.184044 kernel: secureboot: Secure boot disabled
May 8 00:27:56.184056 kernel: esrt: Reserving ESRT space from 0x00000000ea47c818 to 0x00000000ea47c878.
May 8 00:27:56.184064 kernel: ACPI: Early table checksum verification disabled
May 8 00:27:56.184070 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 8 00:27:56.184076 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 8 00:27:56.184082 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 8 00:27:56.184088 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 8 00:27:56.184094 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 8 00:27:56.184102 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 8 00:27:56.184108 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 8 00:27:56.184114 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 8 00:27:56.184120 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 8 00:27:56.184127 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 8 00:27:56.184133 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 8 00:27:56.184139 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 8 00:27:56.184145 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 8 00:27:56.184151 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 8 00:27:56.184157 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 8 00:27:56.184165 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 8 00:27:56.184171 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 8 00:27:56.184177 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 8 00:27:56.184183 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 8 00:27:56.184189 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 8 00:27:56.184196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 8 00:27:56.184202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 8 00:27:56.184208 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 8 00:27:56.184214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 8 00:27:56.184220 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff]
May 8 00:27:56.184226 kernel: Zone ranges:
May 8 00:27:56.184233 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
May 8 00:27:56.184239 kernel: DMA32 empty
May 8 00:27:56.184246 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
May 8 00:27:56.184252 kernel: Movable zone start for each node
May 8 00:27:56.184258 kernel: Early memory node ranges
May 8 00:27:56.184267 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
May 8 00:27:56.184273 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
May 8 00:27:56.184281 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
May 8 00:27:56.184288 kernel: node 0: [mem 0x0000000094000000-0x00000000eba32fff]
May 8 00:27:56.184294 kernel: node 0: [mem 0x00000000eba33000-0x00000000ebeb4fff]
May 8 00:27:56.184301 kernel: node 0: [mem 0x00000000ebeb5000-0x00000000ebeb9fff]
May 8 00:27:56.184307 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff]
May 8 00:27:56.184314 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 8 00:27:56.184320 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 8 00:27:56.184326 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 8 00:27:56.184333 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 8 00:27:56.184339 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
May 8 00:27:56.184347 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
May 8 00:27:56.184354 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 8 00:27:56.184360 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 8 00:27:56.184367 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 8 00:27:56.184373 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 8 00:27:56.184380 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 8 00:27:56.184386 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
May 8 00:27:56.184392 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
May 8 00:27:56.184399 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 8 00:27:56.184406 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 8 00:27:56.184412 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 8 00:27:56.184420 kernel: psci: probing for conduit method from ACPI.
May 8 00:27:56.184426 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:27:56.184433 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:27:56.184439 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 8 00:27:56.184446 kernel: psci: SMC Calling Convention v1.2
May 8 00:27:56.184452 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 8 00:27:56.184459 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 8 00:27:56.184465 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 8 00:27:56.184472 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 8 00:27:56.184478 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 8 00:27:56.184485 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 8 00:27:56.184491 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 8 00:27:56.184499 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 8 00:27:56.184506 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 8 00:27:56.184512 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 8 00:27:56.184519 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 8 00:27:56.184525 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 8 00:27:56.184532 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 8 00:27:56.184538 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 8 00:27:56.184545 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 8 00:27:56.184551 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 8 00:27:56.184558 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 8 00:27:56.184564 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 8 00:27:56.184571 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 8 00:27:56.184578 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 8 00:27:56.184585 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 8 00:27:56.184591 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 8 00:27:56.184598 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 8 00:27:56.184604 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 8 00:27:56.184610 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 8 00:27:56.184617 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 8 00:27:56.184623 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 8 00:27:56.184630 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 8 00:27:56.184636 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 8 00:27:56.184643 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 8 00:27:56.184650 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 8 00:27:56.184657 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 8 00:27:56.184663 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 8 00:27:56.184670 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 8 00:27:56.184676 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 8 00:27:56.184682 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 8 00:27:56.184689 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 8 00:27:56.184695 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 8 00:27:56.184702 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 8 00:27:56.184708 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 8 00:27:56.184714 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 8 00:27:56.184721 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 8 00:27:56.184728 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 8 00:27:56.184735 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 8 00:27:56.184741 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 8 00:27:56.184747 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 8 00:27:56.184754 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 8 00:27:56.184760 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 8 00:27:56.184767 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 8 00:27:56.184773 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 8 00:27:56.184786 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 8 00:27:56.184793 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 8 00:27:56.184801 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 8 00:27:56.184808 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 8 00:27:56.184815 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 8 00:27:56.184822 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 8 00:27:56.184828 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 8 00:27:56.184835 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 8 00:27:56.184843 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 8 00:27:56.184850 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 8 00:27:56.184857 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 8 00:27:56.184864 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 8 00:27:56.184871 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 8 00:27:56.184878 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 8 00:27:56.184884 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 8 00:27:56.184891 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 8 00:27:56.184898 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 8 00:27:56.184905 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 8 00:27:56.184911 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 8 00:27:56.184918 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 8 00:27:56.184926 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 8 00:27:56.184933 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 8 00:27:56.184940 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 8 00:27:56.184947 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 8 00:27:56.184954 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 8 00:27:56.184960 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 8 00:27:56.184967 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 8 00:27:56.184974 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 8 00:27:56.184981 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 8 00:27:56.184988 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 8 00:27:56.184994 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:27:56.185003 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:27:56.185010 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 8 00:27:56.185017 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 8 00:27:56.185024 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 8 00:27:56.185031 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 8 00:27:56.185037 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 8 00:27:56.185044 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 8 00:27:56.185055 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 8 00:27:56.185062 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 8 00:27:56.185069 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 8 00:27:56.185075 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 8 00:27:56.185084 kernel: Detected PIPT I-cache on CPU0
May 8 00:27:56.185091 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:27:56.185098 kernel: CPU features: detected: Virtualization Host Extensions
May 8 00:27:56.185104 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:27:56.185111 kernel: CPU features: detected: Spectre-v4
May 8 00:27:56.185118 kernel: CPU features: detected: Spectre-BHB
May 8 00:27:56.185125 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:27:56.185132 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:27:56.185139 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:27:56.185146 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:27:56.185153 kernel: alternatives: applying boot alternatives
May 8 00:27:56.185161 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 8 00:27:56.185169 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:27:56.185176 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 8 00:27:56.185183 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 8 00:27:56.185190 kernel: printk: log_buf_len min size: 262144 bytes
May 8 00:27:56.185197 kernel: printk: log_buf_len: 1048576 bytes
May 8 00:27:56.185204 kernel: printk: early log buf free: 249864(95%)
May 8 00:27:56.185211 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 8 00:27:56.185218 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 8 00:27:56.185225 kernel: Fallback order for Node 0: 0
May 8 00:27:56.185232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 8 00:27:56.185240 kernel: Policy zone: Normal
May 8 00:27:56.185247 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:27:56.185254 kernel: software IO TLB: area num 128.
May 8 00:27:56.185261 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 8 00:27:56.185268 kernel: Memory: 262923416K/268174336K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 5250920K reserved, 0K cma-reserved)
May 8 00:27:56.185275 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 8 00:27:56.185282 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:27:56.185290 kernel: rcu: RCU event tracing is enabled.
May 8 00:27:56.185297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 8 00:27:56.185304 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:27:56.185311 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:27:56.185318 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:27:56.185327 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 8 00:27:56.185334 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:27:56.185341 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 8 00:27:56.185348 kernel: GICv3: 672 SPIs implemented
May 8 00:27:56.185355 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:27:56.185361 kernel: Root IRQ handler: gic_handle_irq
May 8 00:27:56.185368 kernel: GICv3: GICv3 features: 16 PPIs
May 8 00:27:56.185375 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 8 00:27:56.185382 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 8 00:27:56.185389 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 8 00:27:56.185396 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 8 00:27:56.185403 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 8 00:27:56.185411 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 8 00:27:56.185418 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 8 00:27:56.185424 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 8 00:27:56.185431 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 8 00:27:56.185438 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 8 00:27:56.185445 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185452 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185459 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 8 00:27:56.185466 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185473 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185480 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 8 00:27:56.185488 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185496 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185503 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 8 00:27:56.185510 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185517 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185524 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 8 00:27:56.185531 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185538 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185545 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 8 00:27:56.185552 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185559 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185567 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 8 00:27:56.185574 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185581 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185588 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 8 00:27:56.185595 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:27:56.185602 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:27:56.185609 kernel: GICv3: using LPI property table @0x00000800003e0000
May 8 00:27:56.185616 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
May 8 00:27:56.185623 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:27:56.185629 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.185636 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 8 00:27:56.185645 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 8 00:27:56.185652 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:27:56.185659 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:27:56.185666 kernel: Console: colour dummy device 80x25
May 8 00:27:56.185673 kernel: printk: console [tty0] enabled
May 8 00:27:56.185680 kernel: ACPI: Core revision 20230628
May 8 00:27:56.185687 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:27:56.185694 kernel: pid_max: default: 81920 minimum: 640
May 8 00:27:56.185701 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:27:56.185708 kernel: landlock: Up and running.
May 8 00:27:56.185717 kernel: SELinux: Initializing.
May 8 00:27:56.185724 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:27:56.185731 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:27:56.185738 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 8 00:27:56.185746 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 8 00:27:56.185753 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:27:56.185760 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:27:56.185767 kernel: Platform MSI: ITS@0x100100040000 domain created
May 8 00:27:56.185774 kernel: Platform MSI: ITS@0x100100060000 domain created
May 8 00:27:56.185783 kernel: Platform MSI: ITS@0x100100080000 domain created
May 8 00:27:56.185790 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 8 00:27:56.185797 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 8 00:27:56.185804 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 8 00:27:56.185810 kernel: Platform MSI: ITS@0x100100100000 domain created
May 8 00:27:56.185817 kernel: Platform MSI: ITS@0x100100120000 domain created
May 8 00:27:56.185824 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 8 00:27:56.185831 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 8 00:27:56.185838 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 8 00:27:56.185847 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 8 00:27:56.185854 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 8 00:27:56.185861 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 8 00:27:56.185868 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 8 00:27:56.185875 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 8 00:27:56.185882 kernel: Remapping and enabling EFI services.
May 8 00:27:56.185889 kernel: smp: Bringing up secondary CPUs ...
May 8 00:27:56.185896 kernel: Detected PIPT I-cache on CPU1
May 8 00:27:56.185903 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 8 00:27:56.185910 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 8 00:27:56.185918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.185925 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 8 00:27:56.185932 kernel: Detected PIPT I-cache on CPU2
May 8 00:27:56.185940 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 8 00:27:56.185947 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 8 00:27:56.185954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.185961 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 8 00:27:56.185968 kernel: Detected PIPT I-cache on CPU3
May 8 00:27:56.185975 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 8 00:27:56.185983 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 8 00:27:56.185991 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.185997 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 8 00:27:56.186004 kernel: Detected PIPT I-cache on CPU4
May 8 00:27:56.186012 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 8 00:27:56.186019 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 8 00:27:56.186026 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186033 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 8 00:27:56.186040 kernel: Detected PIPT I-cache on CPU5
May 8 00:27:56.186047 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 8 00:27:56.186058 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 8 00:27:56.186065 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186072 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 8 00:27:56.186079 kernel: Detected PIPT I-cache on CPU6
May 8 00:27:56.186086 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 8 00:27:56.186094 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 8 00:27:56.186101 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186108 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 8 00:27:56.186115 kernel: Detected PIPT I-cache on CPU7
May 8 00:27:56.186123 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 8 00:27:56.186130 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 8 00:27:56.186137 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186144 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 8 00:27:56.186151 kernel: Detected PIPT I-cache on CPU8
May 8 00:27:56.186158 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 8 00:27:56.186165 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 8 00:27:56.186172 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186179 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 8 00:27:56.186186 kernel: Detected PIPT I-cache on CPU9
May 8 00:27:56.186195 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 8 00:27:56.186202 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 8 00:27:56.186209 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186216 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 8 00:27:56.186223 kernel: Detected PIPT I-cache on CPU10
May 8 00:27:56.186230 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 8 00:27:56.186237 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 8 00:27:56.186244 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186251 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 8 00:27:56.186260 kernel: Detected PIPT I-cache on CPU11
May 8 00:27:56.186267 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 8 00:27:56.186274 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 8 00:27:56.186281 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186288 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 8 00:27:56.186295 kernel: Detected PIPT I-cache on CPU12
May 8 00:27:56.186303 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 8 00:27:56.186310 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 8 00:27:56.186317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186324 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 8 00:27:56.186332 kernel: Detected PIPT I-cache on CPU13
May 8 00:27:56.186339 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 8 00:27:56.186346 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 8 00:27:56.186353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186361 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 8 00:27:56.186368 kernel: Detected PIPT I-cache on CPU14
May 8 00:27:56.186375 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 8 00:27:56.186382 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 8 00:27:56.186389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186397 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 8 00:27:56.186404 kernel: Detected PIPT I-cache on CPU15
May 8 00:27:56.186411 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 8 00:27:56.186418 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 8 00:27:56.186425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186432 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 8 00:27:56.186439 kernel: Detected PIPT I-cache on CPU16
May 8 00:27:56.186446 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 8 00:27:56.186454 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 8 00:27:56.186470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186479 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 8 00:27:56.186486 kernel: Detected PIPT I-cache on CPU17
May 8 00:27:56.186493 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 8 00:27:56.186501 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 8 00:27:56.186508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186515 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 8 00:27:56.186523 kernel: Detected PIPT I-cache on CPU18
May 8 00:27:56.186530 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 8 00:27:56.186539 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 8 00:27:56.186546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186553 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 8 00:27:56.186560 kernel: Detected PIPT I-cache on CPU19
May 8 00:27:56.186568 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 8 00:27:56.186575 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
May 8 00:27:56.186583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186592 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 8 00:27:56.186599 kernel: Detected PIPT I-cache on CPU20
May 8 00:27:56.186606 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 8 00:27:56.186614 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
May 8 00:27:56.186621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186628 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 8 00:27:56.186636 kernel: Detected PIPT I-cache on CPU21
May 8 00:27:56.186643 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 8 00:27:56.186650 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
May 8 00:27:56.186659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186667 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 8 00:27:56.186674 kernel: Detected PIPT I-cache on CPU22
May 8 00:27:56.186681 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 8 00:27:56.186689 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
May 8 00:27:56.186697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186706 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 8 00:27:56.186714 kernel: Detected PIPT I-cache on CPU23
May 8 00:27:56.186721 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 8 00:27:56.186730 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
May 8 00:27:56.186737 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186745 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 8 00:27:56.186752 kernel: Detected PIPT I-cache on CPU24
May 8 00:27:56.186759 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 8 00:27:56.186767 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
May 8 00:27:56.186774 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186782 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 8 00:27:56.186789 kernel: Detected PIPT I-cache on CPU25
May 8 00:27:56.186796 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 8 00:27:56.186805 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
May 8 00:27:56.186813 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186820 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 8 00:27:56.186827 kernel: Detected PIPT I-cache on CPU26
May 8 00:27:56.186835 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 8 00:27:56.186842 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
May 8 00:27:56.186849 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186857 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 8 00:27:56.186864 kernel: Detected PIPT I-cache on CPU27
May 8 00:27:56.186873 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 8 00:27:56.186880 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
May 8 00:27:56.186887 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186895 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 8 00:27:56.186902 kernel: Detected PIPT I-cache on CPU28
May 8 00:27:56.186909 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
May 8 00:27:56.186917 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000
May 8 00:27:56.186925 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186932 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
May 8 00:27:56.186939 kernel: Detected PIPT I-cache on CPU29
May 8 00:27:56.186948 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
May 8 00:27:56.186956 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000
May 8 00:27:56.186963 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.186970 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
May 8 00:27:56.186977 kernel: Detected PIPT I-cache on CPU30
May 8 00:27:56.186985 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
May 8 00:27:56.186992 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000
May 8 00:27:56.187000 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.187007 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
May 8 00:27:56.187016 kernel: Detected PIPT I-cache on CPU31
May 8 00:27:56.187023 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
May 8 00:27:56.187031 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000
May 8 00:27:56.187038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.187045 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
May 8 00:27:56.187055 kernel: Detected PIPT I-cache on CPU32
May 8 00:27:56.187062 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
May 8 00:27:56.187070 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000
May 8 00:27:56.187077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:27:56.187084 kernel:
CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 8 00:27:56.187093 kernel: Detected PIPT I-cache on CPU33 May 8 00:27:56.187101 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 8 00:27:56.187108 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 8 00:27:56.187116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187123 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 8 00:27:56.187130 kernel: Detected PIPT I-cache on CPU34 May 8 00:27:56.187137 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 8 00:27:56.187145 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 8 00:27:56.187152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187161 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 8 00:27:56.187168 kernel: Detected PIPT I-cache on CPU35 May 8 00:27:56.187176 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 8 00:27:56.187183 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 8 00:27:56.187191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187198 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 8 00:27:56.187205 kernel: Detected PIPT I-cache on CPU36 May 8 00:27:56.187214 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 8 00:27:56.187222 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 8 00:27:56.187231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187238 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 8 00:27:56.187246 kernel: Detected PIPT I-cache on CPU37 May 8 00:27:56.187253 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 8 00:27:56.187260 
kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 8 00:27:56.187268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187275 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 8 00:27:56.187282 kernel: Detected PIPT I-cache on CPU38 May 8 00:27:56.187290 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 8 00:27:56.187297 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 8 00:27:56.187306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187313 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 8 00:27:56.187321 kernel: Detected PIPT I-cache on CPU39 May 8 00:27:56.187328 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 8 00:27:56.187335 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 8 00:27:56.187343 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187350 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 8 00:27:56.187358 kernel: Detected PIPT I-cache on CPU40 May 8 00:27:56.187366 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 8 00:27:56.187374 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 8 00:27:56.187382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187389 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 8 00:27:56.187396 kernel: Detected PIPT I-cache on CPU41 May 8 00:27:56.187403 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 8 00:27:56.187411 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 8 00:27:56.187418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187426 kernel: CPU41: Booted secondary processor 
0x00001a0100 [0x413fd0c1] May 8 00:27:56.187434 kernel: Detected PIPT I-cache on CPU42 May 8 00:27:56.187442 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 8 00:27:56.187449 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 8 00:27:56.187457 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187465 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 8 00:27:56.187472 kernel: Detected PIPT I-cache on CPU43 May 8 00:27:56.187479 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 8 00:27:56.187487 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 8 00:27:56.187494 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187501 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 8 00:27:56.187510 kernel: Detected PIPT I-cache on CPU44 May 8 00:27:56.187518 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 8 00:27:56.187525 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 8 00:27:56.187532 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187540 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 8 00:27:56.187547 kernel: Detected PIPT I-cache on CPU45 May 8 00:27:56.187555 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 8 00:27:56.187562 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 8 00:27:56.187569 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187578 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] May 8 00:27:56.187585 kernel: Detected PIPT I-cache on CPU46 May 8 00:27:56.187593 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 8 00:27:56.187600 kernel: GICv3: CPU46: using 
allocated LPI pending table @0x0000080000ad0000 May 8 00:27:56.187608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187615 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 8 00:27:56.187622 kernel: Detected PIPT I-cache on CPU47 May 8 00:27:56.187630 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 8 00:27:56.187637 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 8 00:27:56.187646 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187653 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 8 00:27:56.187662 kernel: Detected PIPT I-cache on CPU48 May 8 00:27:56.187669 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 8 00:27:56.187677 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 8 00:27:56.187684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187691 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 8 00:27:56.187699 kernel: Detected PIPT I-cache on CPU49 May 8 00:27:56.187706 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 8 00:27:56.187713 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 8 00:27:56.187723 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187730 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 8 00:27:56.187737 kernel: Detected PIPT I-cache on CPU50 May 8 00:27:56.187744 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 8 00:27:56.187752 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 8 00:27:56.187759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187767 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 8 
00:27:56.187774 kernel: Detected PIPT I-cache on CPU51 May 8 00:27:56.187781 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 8 00:27:56.187790 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 8 00:27:56.187797 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187805 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 8 00:27:56.187812 kernel: Detected PIPT I-cache on CPU52 May 8 00:27:56.187819 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 8 00:27:56.187827 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 8 00:27:56.187834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187842 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 8 00:27:56.187849 kernel: Detected PIPT I-cache on CPU53 May 8 00:27:56.187856 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 8 00:27:56.187865 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 8 00:27:56.187872 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187880 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 8 00:27:56.187887 kernel: Detected PIPT I-cache on CPU54 May 8 00:27:56.187894 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 8 00:27:56.187902 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 8 00:27:56.187909 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187918 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 8 00:27:56.187925 kernel: Detected PIPT I-cache on CPU55 May 8 00:27:56.187934 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 8 00:27:56.187942 kernel: GICv3: CPU55: using allocated LPI pending table 
@0x0000080000b60000 May 8 00:27:56.187949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187957 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 8 00:27:56.187964 kernel: Detected PIPT I-cache on CPU56 May 8 00:27:56.187971 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 8 00:27:56.187979 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 8 00:27:56.187986 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.187993 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 8 00:27:56.188001 kernel: Detected PIPT I-cache on CPU57 May 8 00:27:56.188009 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 8 00:27:56.188017 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 8 00:27:56.188024 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188032 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 8 00:27:56.188039 kernel: Detected PIPT I-cache on CPU58 May 8 00:27:56.188046 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 8 00:27:56.188056 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 8 00:27:56.188063 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188071 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 8 00:27:56.188079 kernel: Detected PIPT I-cache on CPU59 May 8 00:27:56.188087 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 8 00:27:56.188095 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 8 00:27:56.188102 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188109 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 8 00:27:56.188116 kernel: Detected PIPT 
I-cache on CPU60 May 8 00:27:56.188124 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 8 00:27:56.188131 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 8 00:27:56.188139 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188146 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 8 00:27:56.188155 kernel: Detected PIPT I-cache on CPU61 May 8 00:27:56.188163 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 8 00:27:56.188170 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 8 00:27:56.188178 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188185 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 8 00:27:56.188193 kernel: Detected PIPT I-cache on CPU62 May 8 00:27:56.188200 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 8 00:27:56.188207 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 8 00:27:56.188215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188223 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 8 00:27:56.188231 kernel: Detected PIPT I-cache on CPU63 May 8 00:27:56.188238 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 8 00:27:56.188246 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 8 00:27:56.188254 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188261 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 8 00:27:56.188268 kernel: Detected PIPT I-cache on CPU64 May 8 00:27:56.188276 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 8 00:27:56.188283 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 8 00:27:56.188292 
kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188299 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 8 00:27:56.188307 kernel: Detected PIPT I-cache on CPU65 May 8 00:27:56.188314 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 8 00:27:56.188321 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 8 00:27:56.188329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188336 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 8 00:27:56.188343 kernel: Detected PIPT I-cache on CPU66 May 8 00:27:56.188351 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 8 00:27:56.188358 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 8 00:27:56.188367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188374 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 8 00:27:56.188382 kernel: Detected PIPT I-cache on CPU67 May 8 00:27:56.188389 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 8 00:27:56.188397 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 8 00:27:56.188404 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188412 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 8 00:27:56.188419 kernel: Detected PIPT I-cache on CPU68 May 8 00:27:56.188427 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 8 00:27:56.188436 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 8 00:27:56.188443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188450 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 8 00:27:56.188458 kernel: Detected PIPT I-cache on CPU69 May 8 
00:27:56.188465 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 8 00:27:56.188473 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 8 00:27:56.188480 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188487 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 8 00:27:56.188494 kernel: Detected PIPT I-cache on CPU70 May 8 00:27:56.188502 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 8 00:27:56.188511 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 8 00:27:56.188519 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188526 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 8 00:27:56.188533 kernel: Detected PIPT I-cache on CPU71 May 8 00:27:56.188541 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 8 00:27:56.188548 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 8 00:27:56.188556 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188563 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 8 00:27:56.188570 kernel: Detected PIPT I-cache on CPU72 May 8 00:27:56.188579 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 8 00:27:56.188587 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 8 00:27:56.188594 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188601 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 8 00:27:56.188609 kernel: Detected PIPT I-cache on CPU73 May 8 00:27:56.188616 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 8 00:27:56.188624 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 8 00:27:56.188631 kernel: arch_timer: Enabling 
local workaround for ARM erratum 1418040 May 8 00:27:56.188639 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 8 00:27:56.188646 kernel: Detected PIPT I-cache on CPU74 May 8 00:27:56.188655 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 8 00:27:56.188662 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 8 00:27:56.188670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188677 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 8 00:27:56.188684 kernel: Detected PIPT I-cache on CPU75 May 8 00:27:56.188692 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 8 00:27:56.188699 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 8 00:27:56.188707 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188714 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 8 00:27:56.188723 kernel: Detected PIPT I-cache on CPU76 May 8 00:27:56.188730 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 8 00:27:56.188737 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 8 00:27:56.188745 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188752 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 8 00:27:56.188760 kernel: Detected PIPT I-cache on CPU77 May 8 00:27:56.188767 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 8 00:27:56.188774 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 8 00:27:56.188782 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188789 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 8 00:27:56.188798 kernel: Detected PIPT I-cache on CPU78 May 8 00:27:56.188805 kernel: GICv3: CPU78: found 
redistributor 10100 region 0:0x00001001001a0000 May 8 00:27:56.188813 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 8 00:27:56.188820 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188828 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 8 00:27:56.188835 kernel: Detected PIPT I-cache on CPU79 May 8 00:27:56.188842 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 8 00:27:56.188850 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 8 00:27:56.188857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:27:56.188866 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 8 00:27:56.188873 kernel: smp: Brought up 1 node, 80 CPUs May 8 00:27:56.188881 kernel: SMP: Total of 80 processors activated. May 8 00:27:56.188888 kernel: CPU features: detected: 32-bit EL0 Support May 8 00:27:56.188895 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 00:27:56.188903 kernel: CPU features: detected: Common not Private translations May 8 00:27:56.188910 kernel: CPU features: detected: CRC32 instructions May 8 00:27:56.188918 kernel: CPU features: detected: Enhanced Virtualization Traps May 8 00:27:56.188925 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 00:27:56.188934 kernel: CPU features: detected: LSE atomic instructions May 8 00:27:56.188942 kernel: CPU features: detected: Privileged Access Never May 8 00:27:56.188949 kernel: CPU features: detected: RAS Extension Support May 8 00:27:56.188957 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 8 00:27:56.188964 kernel: CPU: All CPU(s) started at EL2 May 8 00:27:56.188971 kernel: alternatives: applying system-wide alternatives May 8 00:27:56.188979 kernel: devtmpfs: initialized May 8 00:27:56.188986 kernel: clocksource: jiffies: mask: 0xffffffff 
max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:27:56.188994 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 8 00:27:56.189003 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:27:56.189010 kernel: SMBIOS 3.4.0 present. May 8 00:27:56.189018 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 8 00:27:56.189025 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:27:56.189033 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 8 00:27:56.189040 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 8 00:27:56.189047 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 8 00:27:56.189057 kernel: audit: initializing netlink subsys (disabled) May 8 00:27:56.189065 kernel: audit: type=2000 audit(0.043:1): state=initialized audit_enabled=0 res=1 May 8 00:27:56.189073 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:27:56.189081 kernel: cpuidle: using governor menu May 8 00:27:56.189088 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 8 00:27:56.189096 kernel: ASID allocator initialised with 32768 entries May 8 00:27:56.189103 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:27:56.189111 kernel: Serial: AMBA PL011 UART driver May 8 00:27:56.189118 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 8 00:27:56.189126 kernel: Modules: 0 pages in range for non-PLT usage May 8 00:27:56.189133 kernel: Modules: 509264 pages in range for PLT usage May 8 00:27:56.189142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:27:56.189149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:27:56.189157 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 8 00:27:56.189164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 8 00:27:56.189172 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:27:56.189179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:27:56.189187 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 8 00:27:56.189194 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 8 00:27:56.189202 kernel: ACPI: Added _OSI(Module Device) May 8 00:27:56.189210 kernel: ACPI: Added _OSI(Processor Device) May 8 00:27:56.189218 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:27:56.189225 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:27:56.189232 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 8 00:27:56.189240 kernel: ACPI: Interpreter enabled May 8 00:27:56.189247 kernel: ACPI: Using GIC for interrupt routing May 8 00:27:56.189254 kernel: ACPI: MCFG table detected, 8 entries May 8 00:27:56.189262 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 8 00:27:56.189269 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 8 00:27:56.189278 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity 
domain 0
May 8 00:27:56.189286 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
May 8 00:27:56.189293 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
May 8 00:27:56.189300 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
May 8 00:27:56.189308 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
May 8 00:27:56.189315 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
May 8 00:27:56.189323 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
May 8 00:27:56.189330 kernel: printk: console [ttyAMA0] enabled
May 8 00:27:56.189338 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
May 8 00:27:56.189347 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
May 8 00:27:56.189482 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:27:56.189554 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:27:56.189618 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
May 8 00:27:56.189682 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:27:56.189744 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
May 8 00:27:56.189806 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
May 8 00:27:56.189819 kernel: PCI host bridge to bus 000d:00
May 8 00:27:56.189892 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
May 8 00:27:56.189952 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
May 8 00:27:56.190009 kernel: pci_bus 000d:00: root bus resource [bus 00-ff]
May 8 00:27:56.190091 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000
May 8 00:27:56.190165 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400
May 8 00:27:56.190236 kernel: pci 000d:00:01.0: enabling Extended Tags
May 8 00:27:56.190301 kernel: pci 000d:00:01.0: supports D1 D2
May 8 00:27:56.190365 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.190438 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400
May 8 00:27:56.190502 kernel: pci 000d:00:02.0: supports D1 D2
May 8 00:27:56.190567 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.190638 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400
May 8 00:27:56.190707 kernel: pci 000d:00:03.0: supports D1 D2
May 8 00:27:56.190771 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.190845 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400
May 8 00:27:56.190910 kernel: pci 000d:00:04.0: supports D1 D2
May 8 00:27:56.190973 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.190982 kernel: acpiphp: Slot [1] registered
May 8 00:27:56.190990 kernel: acpiphp: Slot [2] registered
May 8 00:27:56.191000 kernel: acpiphp: Slot [3] registered
May 8 00:27:56.191007 kernel: acpiphp: Slot [4] registered
May 8 00:27:56.191087 kernel: pci_bus 000d:00: on NUMA node 0
May 8 00:27:56.191152 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:27:56.191216 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.191282 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.191347 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:27:56.191414 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.191492 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.191558 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:27:56.191621 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:27:56.191686 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 8 00:27:56.191752 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:27:56.191815 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:27:56.191882 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 8 00:27:56.191946 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff]
May 8 00:27:56.192011 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref]
May 8 00:27:56.192089 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff]
May 8 00:27:56.192154 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref]
May 8 00:27:56.192218 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff]
May 8 00:27:56.192283 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref]
May 8 00:27:56.192348 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff]
May 8 00:27:56.192414 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref]
May 8 00:27:56.192479 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.192542 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.192607 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.192670 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.192735 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.192800 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.192865 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.192931 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.192994 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.193086 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.193154 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.193217 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.193279 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.193340 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.193406 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.193468 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.193532 kernel: pci 000d:00:01.0: PCI bridge to [bus 01]
May 8 00:27:56.193596 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]
May 8 00:27:56.193659 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]
May 8 00:27:56.193723 kernel: pci 000d:00:02.0: PCI bridge to [bus 02]
May 8 00:27:56.193786 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]
May 8 00:27:56.193854 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]
May 8 00:27:56.193917 kernel: pci 000d:00:03.0: PCI bridge to [bus 03]
May 8 00:27:56.193981 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]
May 8 00:27:56.194045 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]
May 8 00:27:56.194112 kernel: pci 000d:00:04.0: PCI bridge to [bus 04]
May 8 00:27:56.194176 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]
May 8 00:27:56.194239 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]
May 8 00:27:56.194302 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window]
May 8 00:27:56.194359 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window]
May 8 00:27:56.194429 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff]
May 8 00:27:56.194488 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref]
May 8 00:27:56.194556 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff]
May 8 00:27:56.194617 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref]
May 8 00:27:56.194694 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff]
May 8 00:27:56.194757 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref]
May 8 00:27:56.194823 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff]
May 8 00:27:56.194883 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref]
May 8 00:27:56.194892 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff])
May 8 00:27:56.194961 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:27:56.195027 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:27:56.195093 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability]
May 8 00:27:56.195154 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:27:56.195216 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00
May 8 00:27:56.195277 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff]
May 8 00:27:56.195286 kernel: PCI host bridge to bus 0000:00
May 8 00:27:56.195352 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window]
May 8 00:27:56.195411 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window]
May 8 00:27:56.195469 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:27:56.195540 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000
May 8 00:27:56.195613 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400
May 8 00:27:56.195678 kernel: pci 0000:00:01.0: enabling Extended Tags
May 8 00:27:56.195742 kernel: pci 0000:00:01.0: supports D1 D2
May 8 00:27:56.195806 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.195880 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400
May 8 00:27:56.195944 kernel: pci 0000:00:02.0: supports D1 D2
May 8 00:27:56.196006 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.196081 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400
May 8 00:27:56.196146 kernel: pci 0000:00:03.0: supports D1 D2
May 8 00:27:56.196210 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.196280 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400
May 8 00:27:56.196345 kernel: pci 0000:00:04.0: supports D1 D2
May 8 00:27:56.196410 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.196419 kernel: acpiphp: Slot [1-1] registered
May 8 00:27:56.196427 kernel: acpiphp: Slot [2-1] registered
May 8 00:27:56.196434 kernel: acpiphp: Slot [3-1] registered
May 8 00:27:56.196441 kernel: acpiphp: Slot [4-1] registered
May 8 00:27:56.196496 kernel: pci_bus 0000:00: on NUMA node 0
May 8 00:27:56.196561 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:27:56.196629 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.196694 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.196758 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:27:56.196821 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.196886 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.196948 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:27:56.197013 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:27:56.197081 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 8 00:27:56.197147 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:27:56.197210 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:27:56.197274 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 8 00:27:56.197338 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff]
May 8 00:27:56.197402 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 8 00:27:56.197468 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff]
May 8 00:27:56.197532 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 8 00:27:56.197595 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff]
May 8 00:27:56.197658 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 8 00:27:56.197721 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff]
May 8 00:27:56.197786 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 8 00:27:56.197848 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.197912 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.197978 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.198041 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.198109 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.198174 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.198240 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.198303 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.198369 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.198432 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.198496 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.198562 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.198628 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.198690 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.198755 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.198818 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.198881 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 8 00:27:56.198945 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]
May 8 00:27:56.199007 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 8 00:27:56.199077 kernel: pci 0000:00:02.0: PCI bridge to [bus 02]
May 8 00:27:56.199139 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]
May 8 00:27:56.199204 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 8 00:27:56.199268 kernel: pci 0000:00:03.0: PCI bridge to [bus 03]
May 8 00:27:56.199334 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]
May 8 00:27:56.199399 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 8 00:27:56.199462 kernel: pci 0000:00:04.0: PCI bridge to [bus 04]
May 8 00:27:56.199526 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]
May 8 00:27:56.199589 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 8 00:27:56.199651 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window]
May 8 00:27:56.199709 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window]
May 8 00:27:56.199778 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff]
May 8 00:27:56.199838 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 8 00:27:56.199906 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff]
May 8 00:27:56.199966 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 8 00:27:56.200040 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff]
May 8 00:27:56.200108 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 8 00:27:56.200174 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff]
May 8 00:27:56.200235 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 8 00:27:56.200244 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff])
May 8 00:27:56.200313 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:27:56.200376 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:27:56.200441 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability]
May 8 00:27:56.200504 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:27:56.200565 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00
May 8 00:27:56.200626 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff]
May 8 00:27:56.200636 kernel: PCI host bridge to bus 0005:00
May 8 00:27:56.200701 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window]
May 8 00:27:56.200759 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window]
May 8 00:27:56.200818 kernel: pci_bus 0005:00: root bus resource [bus 00-ff]
May 8 00:27:56.200892 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000
May 8 00:27:56.200963 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400
May 8 00:27:56.201028 kernel: pci 0005:00:01.0: supports D1 D2
May 8 00:27:56.201095 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.201168 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400
May 8 00:27:56.201234 kernel: pci 0005:00:03.0: supports D1 D2
May 8 00:27:56.201299 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.201373 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400
May 8 00:27:56.201436 kernel: pci 0005:00:05.0: supports D1 D2
May 8 00:27:56.201501 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.201574 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400
May 8 00:27:56.201638 kernel: pci 0005:00:07.0: supports D1 D2
May 8 00:27:56.201705 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.201714 kernel: acpiphp: Slot [1-2] registered
May 8 00:27:56.201722 kernel: acpiphp: Slot [2-2] registered
May 8 00:27:56.201798 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802
May 8 00:27:56.201865 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit]
May 8 00:27:56.201931 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref]
May 8 00:27:56.202006 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802
May 8 00:27:56.202077 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit]
May 8 00:27:56.202147 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref]
May 8 00:27:56.202205 kernel: pci_bus 0005:00: on NUMA node 0
May 8 00:27:56.202271 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:27:56.202335 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.202398 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.202463 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:27:56.202529 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.202618 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.202684 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:27:56.202750 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:27:56.202815 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 8 00:27:56.202883 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:27:56.202960 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:27:56.203026 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000
May 8 00:27:56.203097 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff]
May 8 00:27:56.203161 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 8 00:27:56.203226 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff]
May 8 00:27:56.203290 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 8 00:27:56.203356 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff]
May 8 00:27:56.203420 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 8 00:27:56.203484 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff]
May 8 00:27:56.203551 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 8 00:27:56.203614 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.203678 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.203742 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.203806 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.203871 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.203934 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.203999 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.204068 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.204133 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.204197 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.204261 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.204327 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.204390 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.204455 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.204520 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.204584 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.204650 kernel: pci 0005:00:01.0: PCI bridge to [bus 01]
May 8 00:27:56.204716 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]
May 8 00:27:56.204781 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 8 00:27:56.204844 kernel: pci 0005:00:03.0: PCI bridge to [bus 02]
May 8 00:27:56.204908 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]
May 8 00:27:56.204972 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 8 00:27:56.205052 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref]
May 8 00:27:56.205121 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit]
May 8 00:27:56.205187 kernel: pci 0005:00:05.0: PCI bridge to [bus 03]
May 8 00:27:56.205251 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]
May 8 00:27:56.205314 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 8 00:27:56.205380 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref]
May 8 00:27:56.205445 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit]
May 8 00:27:56.205512 kernel: pci 0005:00:07.0: PCI bridge to [bus 04]
May 8 00:27:56.205575 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]
May 8 00:27:56.205642 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 8 00:27:56.205703 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window]
May 8 00:27:56.205760 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window]
May 8 00:27:56.205832 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff]
May 8 00:27:56.205892 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 8 00:27:56.205972 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff]
May 8 00:27:56.206032 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 8 00:27:56.206103 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff]
May 8 00:27:56.206164 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 8 00:27:56.206232 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff]
May 8 00:27:56.206296 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 8 00:27:56.206306 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff])
May 8 00:27:56.206375 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:27:56.206439 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:27:56.206501 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability]
May 8 00:27:56.206563 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:27:56.206625 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00
May 8 00:27:56.206688 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff]
May 8 00:27:56.206698 kernel: PCI host bridge to bus 0003:00
May 8 00:27:56.206763 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window]
May 8 00:27:56.206822 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window]
May 8 00:27:56.206879 kernel: pci_bus 0003:00: root bus resource [bus 00-ff]
May 8 00:27:56.206952 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000
May 8 00:27:56.207030 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400
May 8 00:27:56.207102 kernel: pci 0003:00:01.0: supports D1 D2
May 8 00:27:56.207172 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.207248 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400
May 8 00:27:56.207312 kernel: pci 0003:00:03.0: supports D1 D2
May 8 00:27:56.207376 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.207447 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400
May 8 00:27:56.207515 kernel: pci 0003:00:05.0: supports D1 D2
May 8 00:27:56.207578 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.207588 kernel: acpiphp: Slot [1-3] registered
May 8 00:27:56.207595 kernel: acpiphp: Slot [2-3] registered
May 8 00:27:56.207669 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000
May 8 00:27:56.207743 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff]
May 8 00:27:56.207810 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f]
May 8 00:27:56.207880 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff]
May 8 00:27:56.207949 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold
May 8 00:27:56.208018 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref]
May 8 00:27:56.208088 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs)
May 8 00:27:56.208154 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref]
May 8 00:27:56.208221 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs)
May 8 00:27:56.208287 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link)
May 8 00:27:56.208360 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000
May 8 00:27:56.208428 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff]
May 8 00:27:56.208495 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f]
May 8 00:27:56.208562 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff]
May 8 00:27:56.208627 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold
May 8 00:27:56.208694 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref]
May 8 00:27:56.208759 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs)
May 8 00:27:56.208825 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref]
May 8 00:27:56.208894 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs)
May 8 00:27:56.208953 kernel: pci_bus 0003:00: on NUMA node 0
May 8 00:27:56.209019 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:27:56.209087 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.209151 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.209216 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:27:56.209283 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.209350 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.209415 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000
May 8 00:27:56.209480 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000
May 8 00:27:56.209543 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 8 00:27:56.209619 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref]
May 8 00:27:56.209685 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 8 00:27:56.209750 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref]
May 8 00:27:56.209813 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 8 00:27:56.209881 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref]
May 8 00:27:56.209945 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.210009 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.210077 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.210140 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.210206 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.210270 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.210333 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.210400 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.210463 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.210527 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.210593 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.210658 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.210721 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
May 8 00:27:56.210786 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]
May 8 00:27:56.210850 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]
May 8 00:27:56.210918 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
May 8 00:27:56.210985 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]
May 8 00:27:56.211053 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]
May 8 00:27:56.211122 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff]
May 8 00:27:56.211188 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff]
May 8 00:27:56.211257 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff]
May 8 00:27:56.211327 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref]
May 8 00:27:56.211392 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref]
May 8 00:27:56.211460 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff]
May 8 00:27:56.211525 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref]
May 8 00:27:56.211592 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref]
May 8 00:27:56.211658 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 8 00:27:56.211725 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 8 00:27:56.211794 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 8 00:27:56.211860 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 8 00:27:56.211927 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 8 00:27:56.211992 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 8 00:27:56.212061 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 8 00:27:56.212128 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 8 00:27:56.212192 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
May 8 00:27:56.212256 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
May 8 00:27:56.212322 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
May 8 00:27:56.212383 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 8 00:27:56.212440 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
May 8 00:27:56.212498 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
May 8 00:27:56.212577 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
May 8 00:27:56.212638 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
May 8 00:27:56.212710 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
May 8 00:27:56.212770 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
May 8 00:27:56.212842 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
May 8 00:27:56.212901 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
May 8 00:27:56.212912 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
May 8 00:27:56.212982 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:27:56.213048 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:27:56.213113 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
May 8 00:27:56.213175 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:27:56.213237 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
May 8 00:27:56.213299 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
May 8 00:27:56.213309 kernel: PCI host bridge to bus 000c:00
May 8 00:27:56.213374 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
May 8 00:27:56.213435 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
May 8 00:27:56.213492 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
May 8 00:27:56.213563 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000
May 8 00:27:56.213637 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400
May 8 00:27:56.213701 kernel: pci 000c:00:01.0: enabling Extended Tags
May 8 00:27:56.213767 kernel: pci 000c:00:01.0: supports D1 D2
May 8 00:27:56.213830 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.213906 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400
May 8 00:27:56.213972 kernel: pci 000c:00:02.0: supports D1 D2
May 8 00:27:56.214040 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.214117 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400
May 8 00:27:56.214183 kernel: pci 000c:00:03.0: supports D1 D2
May 8 00:27:56.214248 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.214321 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400
May 8 00:27:56.214389 kernel: pci 000c:00:04.0: supports D1 D2
May 8 00:27:56.214454 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot
May 8 00:27:56.214466 kernel: acpiphp: Slot [1-4] registered
May 8 00:27:56.214474 kernel: acpiphp: Slot [2-4] registered
May 8 00:27:56.214483 kernel: acpiphp: Slot [3-2] registered
May 8 00:27:56.214491 kernel: acpiphp: Slot [4-2] registered
May 8 00:27:56.214549 kernel: pci_bus 000c:00: on NUMA node 0
May 8 00:27:56.214613 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:27:56.214681 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.214747 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 8 00:27:56.214812 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:27:56.214877 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.214940 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:27:56.215005 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:27:56.215133 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:27:56.215203 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 8 00:27:56.215266 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:27:56.215328 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:27:56.215391 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 8 00:27:56.215454 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff]
May 8 00:27:56.215516 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref]
May 8 00:27:56.215578 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff]
May 8 00:27:56.215643 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref]
May 8 00:27:56.215705 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
May 8 00:27:56.215768 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref]
May 8 00:27:56.215830 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff]
May 8 00:27:56.215892 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref]
May 8 00:27:56.215954 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.216016 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.216082 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.216149 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.216211 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.216274 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:27:56.216337 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:27:56.216400 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8
00:27:56.216462 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.216525 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.216587 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.216653 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.216715 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.216777 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.216840 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.216902 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.216965 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 8 00:27:56.217028 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 8 00:27:56.217094 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 8 00:27:56.217160 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 8 00:27:56.217222 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 8 00:27:56.217286 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 8 00:27:56.217348 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 8 00:27:56.217411 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 8 00:27:56.217474 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 8 00:27:56.217540 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 8 00:27:56.217603 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 8 00:27:56.217666 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 8 00:27:56.217727 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 8 00:27:56.217784 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 8 00:27:56.217855 
kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 8 00:27:56.217916 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 8 00:27:56.217995 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 8 00:27:56.218058 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 8 00:27:56.218126 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 8 00:27:56.218187 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 8 00:27:56.218257 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 8 00:27:56.218319 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 8 00:27:56.218330 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 8 00:27:56.218401 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:27:56.218466 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 8 00:27:56.218527 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 8 00:27:56.218589 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 8 00:27:56.218652 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 8 00:27:56.218713 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 8 00:27:56.218725 kernel: PCI host bridge to bus 0002:00 May 8 00:27:56.218793 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 8 00:27:56.218850 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 8 00:27:56.218908 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 8 00:27:56.218978 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 8 00:27:56.219057 kernel: pci 0002:00:01.0: [1def:e111] type 01 
class 0x060400 May 8 00:27:56.219124 kernel: pci 0002:00:01.0: supports D1 D2 May 8 00:27:56.219190 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 8 00:27:56.219263 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 8 00:27:56.219328 kernel: pci 0002:00:03.0: supports D1 D2 May 8 00:27:56.219393 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 8 00:27:56.219464 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 8 00:27:56.219528 kernel: pci 0002:00:05.0: supports D1 D2 May 8 00:27:56.219592 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 8 00:27:56.219666 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 8 00:27:56.219732 kernel: pci 0002:00:07.0: supports D1 D2 May 8 00:27:56.219796 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 8 00:27:56.219806 kernel: acpiphp: Slot [1-5] registered May 8 00:27:56.219814 kernel: acpiphp: Slot [2-5] registered May 8 00:27:56.219822 kernel: acpiphp: Slot [3-3] registered May 8 00:27:56.219830 kernel: acpiphp: Slot [4-3] registered May 8 00:27:56.219887 kernel: pci_bus 0002:00: on NUMA node 0 May 8 00:27:56.219952 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 8 00:27:56.220018 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 8 00:27:56.220085 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 8 00:27:56.220154 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 8 00:27:56.220220 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 8 00:27:56.220285 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 8 00:27:56.220351 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] 
to [bus 03] add_size 1000 May 8 00:27:56.220415 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:27:56.220479 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 8 00:27:56.220544 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 8 00:27:56.220609 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 8 00:27:56.220675 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 8 00:27:56.220741 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 8 00:27:56.220806 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 8 00:27:56.220870 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 8 00:27:56.220934 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 8 00:27:56.220997 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 8 00:27:56.221066 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 8 00:27:56.221133 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 8 00:27:56.221197 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 8 00:27:56.221261 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.221326 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.221390 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.221454 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.221518 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.221584 kernel: pci 
0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.221650 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.221714 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.221778 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.221842 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.221907 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.221974 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.222038 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.222110 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.222177 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.222246 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.222309 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 8 00:27:56.222376 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 8 00:27:56.222442 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 8 00:27:56.222505 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 8 00:27:56.222570 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 8 00:27:56.222633 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 8 00:27:56.222703 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 8 00:27:56.222767 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 8 00:27:56.222833 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 8 00:27:56.222898 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 8 00:27:56.222962 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 8 00:27:56.223026 kernel: pci 0002:00:07.0: bridge window [mem 
0x200000600000-0x2000007fffff 64bit pref] May 8 00:27:56.223298 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 8 00:27:56.223361 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 8 00:27:56.223429 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 8 00:27:56.223488 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 8 00:27:56.223555 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 8 00:27:56.223628 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 8 00:27:56.223709 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 8 00:27:56.223768 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 8 00:27:56.223833 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 8 00:27:56.223891 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 8 00:27:56.223902 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 8 00:27:56.223970 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:27:56.224032 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 8 00:27:56.224101 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 8 00:27:56.224162 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 8 00:27:56.224222 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 8 00:27:56.224283 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 8 00:27:56.224293 kernel: PCI host bridge to bus 0001:00 May 8 00:27:56.224357 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 8 00:27:56.224417 kernel: pci_bus 0001:00: root bus resource [mem 
0x380000000000-0x3bffdfffffff window] May 8 00:27:56.224472 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 8 00:27:56.224543 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 8 00:27:56.224616 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 8 00:27:56.224679 kernel: pci 0001:00:01.0: enabling Extended Tags May 8 00:27:56.224742 kernel: pci 0001:00:01.0: supports D1 D2 May 8 00:27:56.224804 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 8 00:27:56.224878 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 8 00:27:56.224941 kernel: pci 0001:00:02.0: supports D1 D2 May 8 00:27:56.225004 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 8 00:27:56.225082 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 8 00:27:56.225147 kernel: pci 0001:00:03.0: supports D1 D2 May 8 00:27:56.225210 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 8 00:27:56.225279 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 8 00:27:56.225345 kernel: pci 0001:00:04.0: supports D1 D2 May 8 00:27:56.225409 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 8 00:27:56.225419 kernel: acpiphp: Slot [1-6] registered May 8 00:27:56.225489 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 8 00:27:56.225555 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 8 00:27:56.225621 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 8 00:27:56.225686 kernel: pci 0001:01:00.0: PME# supported from D3cold May 8 00:27:56.225754 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 8 00:27:56.225827 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 8 00:27:56.225893 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 8 00:27:56.225957 kernel: pci 
0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 8 00:27:56.226023 kernel: pci 0001:01:00.1: PME# supported from D3cold May 8 00:27:56.226035 kernel: acpiphp: Slot [2-6] registered May 8 00:27:56.226043 kernel: acpiphp: Slot [3-4] registered May 8 00:27:56.226228 kernel: acpiphp: Slot [4-4] registered May 8 00:27:56.226307 kernel: pci_bus 0001:00: on NUMA node 0 May 8 00:27:56.226375 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 8 00:27:56.226439 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 8 00:27:56.226502 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 8 00:27:56.226565 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 8 00:27:56.226629 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 8 00:27:56.226699 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:27:56.226768 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 8 00:27:56.226833 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 8 00:27:56.226896 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 8 00:27:56.226959 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 8 00:27:56.227023 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 8 00:27:56.227089 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 8 00:27:56.227156 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 8 00:27:56.227219 kernel: pci 0001:00:02.0: BAR 
15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 8 00:27:56.227282 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 8 00:27:56.227345 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 8 00:27:56.227408 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 8 00:27:56.227470 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 8 00:27:56.227536 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.227599 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.227663 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.227726 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.227789 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.227852 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.227914 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.227977 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.228039 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.228106 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.228169 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.228234 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.228298 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.228360 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.228424 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.228485 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.228552 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 8 
00:27:56.228619 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 8 00:27:56.228684 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 8 00:27:56.228751 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 8 00:27:56.228814 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 8 00:27:56.228877 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 8 00:27:56.228939 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 8 00:27:56.229002 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 8 00:27:56.229068 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 8 00:27:56.229134 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 8 00:27:56.229197 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 8 00:27:56.229260 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 8 00:27:56.229323 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 8 00:27:56.229385 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 8 00:27:56.229448 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 8 00:27:56.229512 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 8 00:27:56.229572 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 8 00:27:56.229629 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 8 00:27:56.229705 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 8 00:27:56.229765 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 8 00:27:56.229831 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 8 00:27:56.229891 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 8 00:27:56.229959 kernel: pci_bus 
0001:03: resource 1 [mem 0x60400000-0x605fffff] May 8 00:27:56.230019 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 8 00:27:56.230223 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 8 00:27:56.230287 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 8 00:27:56.230297 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 8 00:27:56.230366 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:27:56.230432 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 8 00:27:56.230492 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 8 00:27:56.230552 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 8 00:27:56.230612 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 8 00:27:56.230673 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 8 00:27:56.230683 kernel: PCI host bridge to bus 0004:00 May 8 00:27:56.230747 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 8 00:27:56.230806 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 8 00:27:56.230862 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 8 00:27:56.230932 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 8 00:27:56.231004 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 8 00:27:56.231074 kernel: pci 0004:00:01.0: supports D1 D2 May 8 00:27:56.231137 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 8 00:27:56.231208 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 8 00:27:56.231275 kernel: pci 0004:00:03.0: supports D1 D2 May 8 00:27:56.231339 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 8 
00:27:56.231410 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 8 00:27:56.231475 kernel: pci 0004:00:05.0: supports D1 D2 May 8 00:27:56.231538 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 8 00:27:56.231610 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 8 00:27:56.231677 kernel: pci 0004:01:00.0: enabling Extended Tags May 8 00:27:56.231744 kernel: pci 0004:01:00.0: supports D1 D2 May 8 00:27:56.231809 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 00:27:56.231886 kernel: pci_bus 0004:02: extended config space not accessible May 8 00:27:56.231961 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 8 00:27:56.232030 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 8 00:27:56.232101 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 8 00:27:56.232167 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 8 00:27:56.232237 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 8 00:27:56.232303 kernel: pci 0004:02:00.0: supports D1 D2 May 8 00:27:56.232370 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 8 00:27:56.232442 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 8 00:27:56.232507 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 8 00:27:56.232573 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 8 00:27:56.232629 kernel: pci_bus 0004:00: on NUMA node 0 May 8 00:27:56.232696 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 8 00:27:56.232758 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 8 00:27:56.232822 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:27:56.232885 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] 
add_size 100000 add_align 100000 May 8 00:27:56.232949 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 8 00:27:56.233012 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 8 00:27:56.233080 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 8 00:27:56.233147 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 8 00:27:56.233210 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 8 00:27:56.233274 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 8 00:27:56.233337 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 8 00:27:56.233400 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 8 00:27:56.233463 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 8 00:27:56.233527 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.233594 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.233656 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.233720 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.233783 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.233846 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.233909 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.233971 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.234034 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.234099 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.234165 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 
0x1000] May 8 00:27:56.234227 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.234294 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 8 00:27:56.234359 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 8 00:27:56.234424 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 8 00:27:56.234493 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 8 00:27:56.234560 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 8 00:27:56.234627 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 8 00:27:56.234697 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 8 00:27:56.234761 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 8 00:27:56.234826 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 8 00:27:56.234889 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 8 00:27:56.234953 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 8 00:27:56.235017 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 8 00:27:56.235086 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 8 00:27:56.235150 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 8 00:27:56.235214 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 8 00:27:56.235278 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 8 00:27:56.235341 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 8 00:27:56.235404 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 8 00:27:56.235471 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 8 00:27:56.235530 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 8 00:27:56.235589 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff 
window]
May 8 00:27:56.235645 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window]
May 8 00:27:56.235713 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff]
May 8 00:27:56.235772 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref]
May 8 00:27:56.235836 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff]
May 8 00:27:56.235902 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff]
May 8 00:27:56.235962 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref]
May 8 00:27:56.236031 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff]
May 8 00:27:56.236094 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref]
May 8 00:27:56.236104 kernel: iommu: Default domain type: Translated
May 8 00:27:56.236112 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:27:56.236120 kernel: efivars: Registered efivars operations
May 8 00:27:56.236187 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device
May 8 00:27:56.236254 kernel: pci 0004:02:00.0: vgaarb: bridge control possible
May 8 00:27:56.236324 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
May 8 00:27:56.236334 kernel: vgaarb: loaded
May 8 00:27:56.236342 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:27:56.236350 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:27:56.236358 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:27:56.236366 kernel: pnp: PnP ACPI init
May 8 00:27:56.236433 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved
May 8 00:27:56.236494 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved
May 8 00:27:56.236553 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved
May 8 00:27:56.236610 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved
May 8 00:27:56.236667 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved
May 8 00:27:56.236723 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved
May 8 00:27:56.236783 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved
May 8 00:27:56.236839 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved
May 8 00:27:56.236851 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:27:56.236860 kernel: NET: Registered PF_INET protocol family
May 8 00:27:56.236868 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:27:56.236876 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 8 00:27:56.236884 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:27:56.236892 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:27:56.236899 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 8 00:27:56.236908 kernel: TCP: Hash tables configured (established 524288 bind 65536)
May 8 00:27:56.236915 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 8 00:27:56.236925 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 8 00:27:56.236933 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:27:56.236999 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes
May 8 00:27:56.237010 kernel: kvm [1]: IPA Size Limit: 48 bits
May 8 00:27:56.237018 kernel: kvm [1]: GICv3: no GICV resource entry
May 8 00:27:56.237026 kernel: kvm [1]: disabling GICv2 emulation
May 8 00:27:56.237034 kernel: kvm [1]: GIC system register CPU interface enabled
May 8 00:27:56.237042 kernel: kvm [1]: vgic interrupt IRQ9
May 8 00:27:56.237052 kernel: kvm [1]: VHE mode initialized successfully
May 8 00:27:56.237063 kernel: Initialise system trusted keyrings
May 8 00:27:56.237070 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0
May 8 00:27:56.237078 kernel: Key type asymmetric registered
May 8 00:27:56.237086 kernel: Asymmetric key parser 'x509' registered
May 8 00:27:56.237093 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:27:56.237101 kernel: io scheduler mq-deadline registered
May 8 00:27:56.237109 kernel: io scheduler kyber registered
May 8 00:27:56.237117 kernel: io scheduler bfq registered
May 8 00:27:56.237125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:27:56.237134 kernel: ACPI: button: Power Button [PWRB]
May 8 00:27:56.237142 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s).
May 8 00:27:56.237149 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:27:56.237221 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0
May 8 00:27:56.237284 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.237344 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.237403 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq
May 8 00:27:56.237461 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq
May 8 00:27:56.237521 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq
May 8 00:27:56.237588 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0
May 8 00:27:56.237648 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.237706 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.237765 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq
May 8 00:27:56.237823 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq
May 8 00:27:56.237884 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq
May 8 00:27:56.237949 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0
May 8 00:27:56.238009 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.238070 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.238129 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq
May 8 00:27:56.238188 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq
May 8 00:27:56.238246 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq
May 8 00:27:56.238316 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0
May 8 00:27:56.238375 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.238433 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.238491 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq
May 8 00:27:56.238550 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq
May 8 00:27:56.238608 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq
May 8 00:27:56.238687 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0
May 8 00:27:56.238748 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.238807 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.238865 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq
May 8 00:27:56.238923 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq
May 8 00:27:56.238982 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq
May 8 00:27:56.239053 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0
May 8 00:27:56.239116 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.239176 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.239234 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq
May 8 00:27:56.239292 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq
May 8 00:27:56.239350 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq
May 8 00:27:56.239417 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0
May 8 00:27:56.239478 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.239538 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.239597 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq
May 8 00:27:56.239656 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq
May 8 00:27:56.239715 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq
May 8 00:27:56.239779 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0
May 8 00:27:56.239842 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:27:56.239900 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:27:56.239959 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq
May 8 00:27:56.240017 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq
May 8 00:27:56.240079 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq
May 8 00:27:56.240089 kernel: thunder_xcv, ver 1.0
May 8 00:27:56.240097 kernel: thunder_bgx, ver 1.0
May 8 00:27:56.240107 kernel: nicpf, ver 1.0
May 8 00:27:56.240115 kernel: nicvf, ver 1.0
May 8 00:27:56.240181 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:27:56.240240 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:27:54 UTC (1746664074)
May 8 00:27:56.240250 kernel: efifb: probing for efifb
May 8 00:27:56.240258 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k
May 8 00:27:56.240266 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 8 00:27:56.240274 kernel: efifb: scrolling: redraw
May 8 00:27:56.240282 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 8 00:27:56.240292 kernel: Console: switching to colour frame buffer device 100x37
May 8 00:27:56.240299 kernel: fb0: EFI VGA frame buffer device
May 8 00:27:56.240307 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1
May 8 00:27:56.240316 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:27:56.240324 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:27:56.240332 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 00:27:56.240340 kernel: watchdog: Hard watchdog permanently disabled
May 8 00:27:56.240348 kernel: NET: Registered PF_INET6 protocol family
May 8 00:27:56.240355 kernel: Segment Routing with IPv6
May 8 00:27:56.240365 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:27:56.240373 kernel: NET: Registered PF_PACKET protocol family
May 8 00:27:56.240380 kernel: Key type dns_resolver registered
May 8 00:27:56.240388 kernel: registered taskstats version 1
May 8 00:27:56.240396 kernel: Loading compiled-in X.509 certificates
May 8 00:27:56.240404 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: f45666b1b2057b901dda15e57012558a26abdeb0'
May 8 00:27:56.240411 kernel: Key type .fscrypt registered
May 8 00:27:56.240419 kernel: Key type fscrypt-provisioning registered
May 8 00:27:56.240427 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:27:56.240436 kernel: ima: Allocated hash algorithm: sha1
May 8 00:27:56.240444 kernel: ima: No architecture policies found
May 8 00:27:56.240452 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:27:56.240517 kernel: pcieport 000d:00:01.0: Adding to iommu group 0
May 8 00:27:56.240581 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91
May 8 00:27:56.240647 kernel: pcieport 000d:00:02.0: Adding to iommu group 1
May 8 00:27:56.240711 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91
May 8 00:27:56.240776 kernel: pcieport 000d:00:03.0: Adding to iommu group 2
May 8 00:27:56.240839 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91
May 8 00:27:56.240906 kernel: pcieport 000d:00:04.0: Adding to iommu group 3
May 8 00:27:56.240970 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91
May 8 00:27:56.241036 kernel: pcieport 0000:00:01.0: Adding to iommu group 4
May 8 00:27:56.241105 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92
May 8 00:27:56.241171 kernel: pcieport 0000:00:02.0: Adding to iommu group 5
May 8 00:27:56.241235 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92
May 8 00:27:56.241301 kernel: pcieport 0000:00:03.0: Adding to iommu group 6
May 8 00:27:56.241365 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92
May 8 00:27:56.241434 kernel: pcieport 0000:00:04.0: Adding to iommu group 7
May 8 00:27:56.241497 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92
May 8 00:27:56.241563 kernel: pcieport 0005:00:01.0: Adding to iommu group 8
May 8 00:27:56.241627 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93
May 8 00:27:56.241692 kernel: pcieport 0005:00:03.0: Adding to iommu group 9
May 8 00:27:56.241755 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93
May 8 00:27:56.241820 kernel: pcieport 0005:00:05.0: Adding to iommu group 10
May 8 00:27:56.241883 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93
May 8 00:27:56.241950 kernel: pcieport 0005:00:07.0: Adding to iommu group 11
May 8 00:27:56.242014 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93
May 8 00:27:56.242082 kernel: pcieport 0003:00:01.0: Adding to iommu group 12
May 8 00:27:56.242147 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94
May 8 00:27:56.242211 kernel: pcieport 0003:00:03.0: Adding to iommu group 13
May 8 00:27:56.242274 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94
May 8 00:27:56.242338 kernel: pcieport 0003:00:05.0: Adding to iommu group 14
May 8 00:27:56.242402 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94
May 8 00:27:56.242470 kernel: pcieport 000c:00:01.0: Adding to iommu group 15
May 8 00:27:56.242535 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95
May 8 00:27:56.242600 kernel: pcieport 000c:00:02.0: Adding to iommu group 16
May 8 00:27:56.242663 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95
May 8 00:27:56.242728 kernel: pcieport 000c:00:03.0: Adding to iommu group 17
May 8 00:27:56.242791 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95
May 8 00:27:56.242857 kernel: pcieport 000c:00:04.0: Adding to iommu group 18
May 8 00:27:56.242920 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95
May 8 00:27:56.242985 kernel: pcieport 0002:00:01.0: Adding to iommu group 19
May 8 00:27:56.243054 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96
May 8 00:27:56.243119 kernel: pcieport 0002:00:03.0: Adding to iommu group 20
May 8 00:27:56.243182 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96
May 8 00:27:56.243246 kernel: pcieport 0002:00:05.0: Adding to iommu group 21
May 8 00:27:56.243312 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96
May 8 00:27:56.243376 kernel: pcieport 0002:00:07.0: Adding to iommu group 22
May 8 00:27:56.243439 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96
May 8 00:27:56.243504 kernel: pcieport 0001:00:01.0: Adding to iommu group 23
May 8 00:27:56.243569 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97
May 8 00:27:56.243633 kernel: pcieport 0001:00:02.0: Adding to iommu group 24
May 8 00:27:56.243696 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97
May 8 00:27:56.243761 kernel: pcieport 0001:00:03.0: Adding to iommu group 25
May 8 00:27:56.243824 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97
May 8 00:27:56.243889 kernel: pcieport 0001:00:04.0: Adding to iommu group 26
May 8 00:27:56.243953 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97
May 8 00:27:56.244017 kernel: pcieport 0004:00:01.0: Adding to iommu group 27
May 8 00:27:56.244088 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98
May 8 00:27:56.244153 kernel: pcieport 0004:00:03.0: Adding to iommu group 28
May 8 00:27:56.244217 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98
May 8 00:27:56.244281 kernel: pcieport 0004:00:05.0: Adding to iommu group 29
May 8 00:27:56.244344 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98
May 8 00:27:56.244412 kernel: pcieport 0004:01:00.0: Adding to iommu group 30
May 8 00:27:56.244422 kernel: clk: Disabling unused clocks
May 8 00:27:56.244430 kernel: Freeing unused kernel memory: 38336K
May 8 00:27:56.244440 kernel: Run /init as init process
May 8 00:27:56.244448 kernel: with arguments:
May 8 00:27:56.244455 kernel: /init
May 8 00:27:56.244463 kernel: with environment:
May 8 00:27:56.244471 kernel: HOME=/
May 8 00:27:56.244478 kernel: TERM=linux
May 8 00:27:56.244486 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:27:56.244495 systemd[1]: Successfully made /usr/ read-only.
May 8 00:27:56.244508 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:27:56.244517 systemd[1]: Detected architecture arm64.
May 8 00:27:56.244525 systemd[1]: Running in initrd.
May 8 00:27:56.244533 systemd[1]: No hostname configured, using default hostname.
May 8 00:27:56.244541 systemd[1]: Hostname set to .
May 8 00:27:56.244549 systemd[1]: Initializing machine ID from random generator.
May 8 00:27:56.244557 systemd[1]: Queued start job for default target initrd.target.
May 8 00:27:56.244565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:27:56.244575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:27:56.244584 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:27:56.244593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:27:56.244601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:27:56.244610 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:27:56.244619 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:27:56.244629 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:27:56.244637 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:27:56.244646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:27:56.244654 systemd[1]: Reached target paths.target - Path Units.
May 8 00:27:56.244662 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:27:56.244670 systemd[1]: Reached target swap.target - Swaps.
May 8 00:27:56.244678 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:27:56.244687 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:27:56.244695 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:27:56.244704 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:27:56.244713 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:27:56.244721 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:27:56.244729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:27:56.244737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:27:56.244746 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:27:56.244754 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:27:56.244762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:27:56.244770 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:27:56.244780 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:27:56.244789 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:27:56.244798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:27:56.244829 systemd-journald[898]: Collecting audit messages is disabled.
May 8 00:27:56.244850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:27:56.244859 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:27:56.244867 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:27:56.244875 kernel: Bridge firewalling registered
May 8 00:27:56.244884 systemd-journald[898]: Journal started
May 8 00:27:56.244903 systemd-journald[898]: Runtime Journal (/run/log/journal/39e36433458a4b4192d5936933a50e71) is 8M, max 4G, 3.9G free.
May 8 00:27:56.203216 systemd-modules-load[902]: Inserted module 'overlay'
May 8 00:27:56.284399 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:27:56.226894 systemd-modules-load[902]: Inserted module 'br_netfilter'
May 8 00:27:56.289940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:27:56.300605 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:27:56.311336 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:27:56.321898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:27:56.343216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:27:56.360201 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:27:56.366690 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:27:56.388920 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:27:56.405900 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:27:56.422695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:27:56.434199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:27:56.451345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:27:56.482233 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:27:56.489835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:27:56.501754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:27:56.526943 dracut-cmdline[947]: dracut-dracut-053
May 8 00:27:56.526943 dracut-cmdline[947]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 8 00:27:56.515116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:27:56.529033 systemd-resolved[952]: Positive Trust Anchors:
May 8 00:27:56.529042 systemd-resolved[952]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:27:56.529076 systemd-resolved[952]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:27:56.543748 systemd-resolved[952]: Defaulting to hostname 'linux'.
May 8 00:27:56.545163 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:27:56.675949 kernel: SCSI subsystem initialized
May 8 00:27:56.578701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:27:56.691801 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:27:56.705056 kernel: iscsi: registered transport (tcp)
May 8 00:27:56.732701 kernel: iscsi: registered transport (qla4xxx)
May 8 00:27:56.732724 kernel: QLogic iSCSI HBA Driver
May 8 00:27:56.775353 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:27:56.800174 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:27:56.846500 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:27:56.846534 kernel: device-mapper: uevent: version 1.0.3
May 8 00:27:56.856147 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:27:56.922057 kernel: raid6: neonx8 gen() 15848 MB/s
May 8 00:27:56.948059 kernel: raid6: neonx4 gen() 15884 MB/s
May 8 00:27:56.973059 kernel: raid6: neonx2 gen() 13258 MB/s
May 8 00:27:56.999059 kernel: raid6: neonx1 gen() 10580 MB/s
May 8 00:27:57.024059 kernel: raid6: int64x8 gen() 6802 MB/s
May 8 00:27:57.049059 kernel: raid6: int64x4 gen() 7382 MB/s
May 8 00:27:57.074059 kernel: raid6: int64x2 gen() 6134 MB/s
May 8 00:27:57.102333 kernel: raid6: int64x1 gen() 5077 MB/s
May 8 00:27:57.102354 kernel: raid6: using algorithm neonx4 gen() 15884 MB/s
May 8 00:27:57.136609 kernel: raid6: .... xor() 12472 MB/s, rmw enabled
May 8 00:27:57.136630 kernel: raid6: using neon recovery algorithm
May 8 00:27:57.156060 kernel: xor: measuring software checksum speed
May 8 00:27:57.167720 kernel: 8regs : 20966 MB/sec
May 8 00:27:57.167742 kernel: 32regs : 21699 MB/sec
May 8 00:27:57.175534 kernel: arm64_neon : 28022 MB/sec
May 8 00:27:57.183483 kernel: xor: using function: arm64_neon (28022 MB/sec)
May 8 00:27:57.245058 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:27:57.254432 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:27:57.273186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:27:57.287158 systemd-udevd[1145]: Using default interface naming scheme 'v255'.
May 8 00:27:57.290832 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:27:57.311158 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:27:57.325109 dracut-pre-trigger[1158]: rd.md=0: removing MD RAID activation
May 8 00:27:57.350531 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:27:57.375172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:27:57.478708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:27:57.508562 kernel: pps_core: LinuxPPS API ver. 1 registered
May 8 00:27:57.508591 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 8 00:27:57.527160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:27:57.573521 kernel: ACPI: bus type USB registered
May 8 00:27:57.573535 kernel: usbcore: registered new interface driver usbfs
May 8 00:27:57.573545 kernel: usbcore: registered new interface driver hub
May 8 00:27:57.573555 kernel: usbcore: registered new device driver usb
May 8 00:27:57.573565 kernel: PTP clock support registered
May 8 00:27:57.569019 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:27:57.725463 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 8 00:27:57.725477 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 8 00:27:57.725486 kernel: igb 0003:03:00.0: Adding to iommu group 31
May 8 00:27:57.780228 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32
May 8 00:27:58.045105 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 8 00:27:58.045277 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
May 8 00:27:58.045396 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
May 8 00:27:58.045479 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 33
May 8 00:27:58.683094 kernel: igb 0003:03:00.0: added PHC on eth0
May 8 00:27:58.683214 kernel: nvme 0005:03:00.0: Adding to iommu group 34
May 8 00:27:58.683299 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 8 00:27:58.683375 kernel: nvme 0005:04:00.0: Adding to iommu group 35
May 8 00:27:58.683458 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:50
May 8 00:27:58.683534 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
May 8 00:27:58.683615 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 8 00:27:58.683692 kernel: igb 0003:03:00.1: Adding to iommu group 36
May 8 00:27:58.683773 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
May 8 00:27:58.683854 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 8 00:27:58.683931 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
May 8 00:27:58.684006 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
May 8 00:27:58.684088 kernel: hub 1-0:1.0: USB hub found
May 8 00:27:58.684191 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
May 8 00:27:58.684274 kernel: hub 1-0:1.0: 4 ports detected
May 8 00:27:58.684362 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 8 00:27:58.684440 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 8 00:27:58.684574 kernel: hub 2-0:1.0: USB hub found
May 8 00:27:58.684669 kernel: hub 2-0:1.0: 4 ports detected
May 8 00:27:58.684756 kernel: nvme nvme1: pci function 0005:04:00.0
May 8 00:27:58.684848 kernel: nvme nvme0: pci function 0005:03:00.0
May 8 00:27:58.684929 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 8 00:27:58.685002 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 8 00:27:58.685079 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 8 00:27:58.685152 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 8 00:27:58.685223 kernel: igb 0003:03:00.1: added PHC on eth1
May 8 00:27:58.685306 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 8 00:27:58.685384 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:51
May 8 00:27:58.685461 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 8 00:27:58.685537 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 8 00:27:58.685614 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:27:58.685625 kernel: GPT:9289727 != 1875385007
May 8 00:27:58.685635 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:27:58.685645 kernel: GPT:9289727 != 1875385007
May 8 00:27:58.685654 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:27:58.685667 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:27:58.685678 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 8 00:27:58.685756 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1228)
May 8 00:27:58.685767 kernel: BTRFS: device fsid a4d66dad-2d34-4ed0-87a7-f6519531b08f devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (1237)
May 8 00:27:58.685777 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 8 00:27:58.685853 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 8 00:27:58.685982 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 8 00:27:58.686071 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:27:58.686081 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:27:58.686091 kernel: hub 1-3:1.0: USB hub found
May 8 00:27:58.686190 kernel: hub 1-3:1.0: 4 ports detected
May 8 00:27:58.686277 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 8 00:27:58.686403 kernel: hub 2-3:1.0: USB hub found
May 8 00:27:58.686503 kernel: hub 2-3:1.0: 4 ports detected
May 8 00:27:58.686591 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 8 00:27:58.686671 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 8 00:27:59.371909 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
May 8 00:27:59.372084 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 8 00:27:59.372168 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 8 00:27:59.372245 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 8 00:27:57.725687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:27:59.404208 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 8 00:27:59.404346 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 8 00:27:57.731189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:27:57.736506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:27:57.741727 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:27:57.741869 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:27:57.747196 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:27:59.450875 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:27:59.450891 disk-uuid[1320]: Primary Header is updated.
May 8 00:27:59.450891 disk-uuid[1320]: Secondary Entries is updated.
May 8 00:27:59.450891 disk-uuid[1320]: Secondary Header is updated.
May 8 00:27:57.761385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:27:59.470478 disk-uuid[1321]: The operation has completed successfully.
May 8 00:27:57.773837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:27:57.773984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:27:57.779231 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:27:57.816252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:27:57.822497 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:27:57.831096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:27:57.831180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:27:57.836346 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:27:57.853185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:27:57.862286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:27:57.868031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:27:58.079845 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:27:58.294813 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. May 8 00:27:59.570226 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 00:27:58.355347 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. May 8 00:27:58.365777 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 8 00:27:59.589950 sh[1494]: Success May 8 00:27:58.381608 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 8 00:27:58.393281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 8 00:27:58.417207 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:27:59.497292 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:27:59.497373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:27:59.538210 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:27:59.609340 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
May 8 00:27:59.747643 kernel: BTRFS info (device dm-0): first mount of filesystem a4d66dad-2d34-4ed0-87a7-f6519531b08f May 8 00:27:59.747659 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 00:27:59.747669 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:27:59.747680 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:27:59.747693 kernel: BTRFS info (device dm-0): using free space tree May 8 00:27:59.747703 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:27:59.619711 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:27:59.645124 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:27:59.661306 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:27:59.877262 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 8 00:27:59.877289 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 8 00:27:59.877309 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 00:27:59.877328 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 00:27:59.877347 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 8 00:27:59.877360 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 8 00:27:59.753282 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:27:59.759176 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:27:59.765794 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:27:59.879486 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 8 00:27:59.894343 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:27:59.918367 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:27:59.929216 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:27:59.963022 systemd-networkd[1681]: lo: Link UP May 8 00:27:59.963027 systemd-networkd[1681]: lo: Gained carrier May 8 00:27:59.967066 systemd-networkd[1681]: Enumeration completed May 8 00:27:59.967139 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:27:59.968296 systemd-networkd[1681]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:27:59.973734 systemd[1]: Reached target network.target - Network. May 8 00:28:00.009114 ignition[1678]: Ignition 2.20.0 May 8 00:28:00.009120 ignition[1678]: Stage: fetch-offline May 8 00:28:00.009170 ignition[1678]: no configs at "/usr/lib/ignition/base.d" May 8 00:28:00.019618 systemd-networkd[1681]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:28:00.009178 ignition[1678]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 8 00:28:00.019765 unknown[1678]: fetched base config from "system" May 8 00:28:00.009404 ignition[1678]: parsed url from cmdline: "" May 8 00:28:00.019772 unknown[1678]: fetched user config from "system" May 8 00:28:00.009407 ignition[1678]: no config URL provided May 8 00:28:00.022458 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:28:00.009412 ignition[1678]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:28:00.030302 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
May 8 00:28:00.009461 ignition[1678]: parsing config with SHA512: d2c53bca1c3716ddeaa431e31d4de2b310a5e29d60b2c3a9185f527d4283679f2e103e9119831bf4a084e877fb4153a13848fed384423f0fb7b3954c73686827 May 8 00:28:00.047205 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:28:00.020201 ignition[1678]: fetch-offline: fetch-offline passed May 8 00:28:00.071952 systemd-networkd[1681]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:28:00.020206 ignition[1678]: POST message to Packet Timeline May 8 00:28:00.020210 ignition[1678]: POST Status error: resource requires networking May 8 00:28:00.020279 ignition[1678]: Ignition finished successfully May 8 00:28:00.063369 ignition[1719]: Ignition 2.20.0 May 8 00:28:00.063376 ignition[1719]: Stage: kargs May 8 00:28:00.063707 ignition[1719]: no configs at "/usr/lib/ignition/base.d" May 8 00:28:00.063715 ignition[1719]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 8 00:28:00.065279 ignition[1719]: kargs: kargs passed May 8 00:28:00.065283 ignition[1719]: POST message to Packet Timeline May 8 00:28:00.065370 ignition[1719]: GET https://metadata.packet.net/metadata: attempt #1 May 8 00:28:00.068134 ignition[1719]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52999->[::1]:53: read: connection refused May 8 00:28:00.269183 ignition[1719]: GET https://metadata.packet.net/metadata: attempt #2 May 8 00:28:00.269546 ignition[1719]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50997->[::1]:53: read: connection refused May 8 00:28:00.662061 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 8 00:28:00.664534 systemd-networkd[1681]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
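The repeated `Configuring with /usr/lib/systemd/network/zz-default.network` messages come from Flatcar's catch-all systemd-networkd unit, which applies DHCP to any link not matched by a more specific `.network` file; the later `DHCPv4 address 147.28.129.25/31` acquisition is its result. A sketch of what such a fallback unit looks like (contents assumed for illustration, not read from this host):

```ini
# /usr/lib/systemd/network/zz-default.network (sketch; the shipped file may differ)
[Match]
# "zz-" sorts last, so any interface already matched by an earlier
# .network file never reaches this catch-all.
Name=*

[Network]
DHCP=yes
```

Dropping a more specific file (e.g. `10-eno1.network`) into `/etc/systemd/network/` overrides this default for that interface.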
May 8 00:28:00.670308 ignition[1719]: GET https://metadata.packet.net/metadata: attempt #3 May 8 00:28:00.671203 ignition[1719]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54333->[::1]:53: read: connection refused May 8 00:28:01.267062 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 8 00:28:01.270262 systemd-networkd[1681]: eno1: Link UP May 8 00:28:01.270386 systemd-networkd[1681]: eno2: Link UP May 8 00:28:01.270498 systemd-networkd[1681]: enP1p1s0f0np0: Link UP May 8 00:28:01.270628 systemd-networkd[1681]: enP1p1s0f0np0: Gained carrier May 8 00:28:01.281253 systemd-networkd[1681]: enP1p1s0f1np1: Link UP May 8 00:28:01.307091 systemd-networkd[1681]: enP1p1s0f0np0: DHCPv4 address 147.28.129.25/31, gateway 147.28.129.24 acquired from 147.28.144.140 May 8 00:28:01.471514 ignition[1719]: GET https://metadata.packet.net/metadata: attempt #4 May 8 00:28:01.471958 ignition[1719]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46590->[::1]:53: read: connection refused May 8 00:28:01.665231 systemd-networkd[1681]: enP1p1s0f1np1: Gained carrier May 8 00:28:02.273294 systemd-networkd[1681]: enP1p1s0f0np0: Gained IPv6LL May 8 00:28:03.072852 ignition[1719]: GET https://metadata.packet.net/metadata: attempt #5 May 8 00:28:03.073231 ignition[1719]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39212->[::1]:53: read: connection refused May 8 00:28:03.681294 systemd-networkd[1681]: enP1p1s0f1np1: Gained IPv6LL May 8 00:28:06.275713 ignition[1719]: GET https://metadata.packet.net/metadata: attempt #6 May 8 00:28:06.835458 ignition[1719]: GET result: OK May 8 00:28:07.196780 ignition[1719]: Ignition finished successfully May 8 00:28:07.200462 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
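The `attempt #1` through `attempt #6` lines above show Ignition retrying its metadata fetch with growing delays while DNS still fails against the stub resolver on `[::1]:53`; the request only succeeds once the links gain carrier and DHCP completes. A sketch of that retry-with-exponential-backoff pattern (the `fetch_metadata` stub is hypothetical; real Ignition implements the loop internally, and the `sleep` is commented out so the sketch runs instantly):

```shell
#!/bin/sh
# Hypothetical stand-in for the real HTTP fetch; always fails here, the way
# the fetch fails in the log until networking is up.
fetch_metadata() {
    # e.g. curl -fsS https://metadata.packet.net/metadata
    return 1
}

delay=1
attempt=1
while [ "$attempt" -le 6 ]; do
    if fetch_metadata; then
        echo "GET result: OK after attempt #$attempt"
        break
    fi
    echo "GET https://metadata.packet.net/metadata: attempt #$attempt failed; retrying in ${delay}s"
    # sleep "$delay"            # omitted in this sketch
    delay=$((delay * 2))        # exponential backoff between attempts
    attempt=$((attempt + 1))
done
```

Backoff like this is why the log shows roughly 0.2 s, 0.4 s, 0.8 s, 1.6 s gaps between the early attempts and a multi-second gap before the one that finally returns `GET result: OK`.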
May 8 00:28:07.218162 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:28:07.229831 ignition[1742]: Ignition 2.20.0 May 8 00:28:07.229838 ignition[1742]: Stage: disks May 8 00:28:07.230082 ignition[1742]: no configs at "/usr/lib/ignition/base.d" May 8 00:28:07.230092 ignition[1742]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 8 00:28:07.231726 ignition[1742]: disks: disks passed May 8 00:28:07.231730 ignition[1742]: POST message to Packet Timeline May 8 00:28:07.231774 ignition[1742]: GET https://metadata.packet.net/metadata: attempt #1 May 8 00:28:07.719945 ignition[1742]: GET result: OK May 8 00:28:08.003020 ignition[1742]: Ignition finished successfully May 8 00:28:08.005154 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:28:08.011510 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:28:08.018918 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:28:08.026733 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:28:08.035104 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:28:08.043893 systemd[1]: Reached target basic.target - Basic System. May 8 00:28:08.064145 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:28:08.079819 systemd-fsck[1759]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:28:08.082994 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:28:08.100122 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:28:08.169055 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f291ddc8-664e-45dc-bbf9-8344dca1a297 r/w with ordered data mode. Quota mode: none. May 8 00:28:08.169260 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:28:08.179509 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
May 8 00:28:08.201125 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:28:08.210055 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1770) May 8 00:28:08.210072 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 8 00:28:08.210082 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 8 00:28:08.210097 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 00:28:08.211063 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 00:28:08.211075 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 8 00:28:08.303106 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:28:08.309457 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 8 00:28:08.319998 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 8 00:28:08.335575 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:28:08.335611 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:28:08.348924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:28:08.378950 coreos-metadata[1792]: May 08 00:28:08.365 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 8 00:28:08.394279 coreos-metadata[1787]: May 08 00:28:08.365 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 8 00:28:08.362480 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:28:08.383260 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 8 00:28:08.422246 initrd-setup-root[1810]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:28:08.428174 initrd-setup-root[1818]: cut: /sysroot/etc/group: No such file or directory May 8 00:28:08.433994 initrd-setup-root[1826]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:28:08.439804 initrd-setup-root[1834]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:28:08.507468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:28:08.532119 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:28:08.563003 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 8 00:28:08.538511 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:28:08.570010 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:28:08.586681 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:28:08.592010 ignition[1908]: INFO : Ignition 2.20.0 May 8 00:28:08.592010 ignition[1908]: INFO : Stage: mount May 8 00:28:08.592010 ignition[1908]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:28:08.592010 ignition[1908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 8 00:28:08.623026 ignition[1908]: INFO : mount: mount passed May 8 00:28:08.623026 ignition[1908]: INFO : POST message to Packet Timeline May 8 00:28:08.623026 ignition[1908]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 8 00:28:08.834972 coreos-metadata[1787]: May 08 00:28:08.834 INFO Fetch successful May 8 00:28:08.882188 coreos-metadata[1787]: May 08 00:28:08.882 INFO wrote hostname ci-4230.1.1-n-e3459bc746 to /sysroot/etc/hostname May 8 00:28:08.885312 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 8 00:28:09.113546 ignition[1908]: INFO : GET result: OK May 8 00:28:09.366246 coreos-metadata[1792]: May 08 00:28:09.366 INFO Fetch successful May 8 00:28:09.415380 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 8 00:28:09.415466 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 8 00:28:09.830974 ignition[1908]: INFO : Ignition finished successfully May 8 00:28:09.833158 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:28:09.850127 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:28:09.861252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:28:09.897202 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1933) May 8 00:28:09.897238 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 8 00:28:09.911499 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 8 00:28:09.924459 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 00:28:09.947330 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 00:28:09.947352 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 8 00:28:09.955400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:28:09.986598 ignition[1950]: INFO : Ignition 2.20.0 May 8 00:28:09.986598 ignition[1950]: INFO : Stage: files May 8 00:28:09.996237 ignition[1950]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:28:09.996237 ignition[1950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 8 00:28:09.996237 ignition[1950]: DEBUG : files: compiled without relabeling support, skipping May 8 00:28:09.996237 ignition[1950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:28:09.996237 ignition[1950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:28:09.996237 ignition[1950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:28:09.996237 ignition[1950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:28:09.996237 ignition[1950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:28:09.996237 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 8 00:28:09.996237 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 8 00:28:09.992696 unknown[1950]: wrote ssh authorized keys file for user: core May 8 00:28:10.088149 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:28:10.155859 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:28:10.166256 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 8 00:28:10.345557 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:28:10.713519 ignition[1950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:28:10.737961 ignition[1950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 8 00:28:10.737961 ignition[1950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:28:10.737961 ignition[1950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:28:10.737961 ignition[1950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 8 00:28:10.737961 ignition[1950]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 8 00:28:10.737961 ignition[1950]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:28:10.737961 ignition[1950]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:28:10.737961 ignition[1950]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:28:10.737961 ignition[1950]: INFO : files: files passed May 8 00:28:10.737961 ignition[1950]: INFO : POST message to Packet Timeline May 8 00:28:10.737961 ignition[1950]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 8 00:28:11.242282 ignition[1950]: INFO : GET result: OK May 8 00:28:11.575662 ignition[1950]: INFO : Ignition finished successfully May 8 00:28:11.578531 systemd[1]: Finished ignition-files.service - Ignition (files).
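The files-stage operations above (op(3) through op(a)) each correspond to an entry in the Ignition config the machine booted with: `storage.files` for fetched or inline files and `storage.links` for symlinks such as the sysext activation link. A minimal sketch of how two of the logged entries could be declared (JSON shape per the Ignition v3 config spec; the spec version and exact values are reconstructed from the log, not taken from the actual config):

```json
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz" }
      }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
      }
    ]
  }
}
```

All paths are created under `/sysroot` (the log shows e.g. `/sysroot/opt/...`) because the files stage runs from the initrd before the real root is switched to.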
May 8 00:28:11.598228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:28:11.610594 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:28:11.628824 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:28:11.628971 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:28:11.647058 initrd-setup-root-after-ignition[1990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:28:11.647058 initrd-setup-root-after-ignition[1990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:28:11.641569 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:28:11.698489 initrd-setup-root-after-ignition[1994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:28:11.654374 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:28:11.678224 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:28:11.712340 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:28:11.712410 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:28:11.722156 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:28:11.738122 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:28:11.749210 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:28:11.759168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:28:11.781574 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:28:11.807208 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
May 8 00:28:11.821723 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:28:11.830507 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:28:11.841718 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:28:11.852956 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:28:11.853078 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:28:11.864324 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:28:11.875306 systemd[1]: Stopped target basic.target - Basic System. May 8 00:28:11.886455 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:28:11.897588 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:28:11.908548 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:28:11.919569 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:28:11.930584 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:28:11.941581 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:28:11.952637 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:28:11.968968 systemd[1]: Stopped target swap.target - Swaps. May 8 00:28:11.980025 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:28:11.980137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:28:11.991352 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:28:12.002238 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:28:12.013334 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:28:12.013404 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 8 00:28:12.024458 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:28:12.024571 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:28:12.035742 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:28:12.035844 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:28:12.046916 systemd[1]: Stopped target paths.target - Path Units. May 8 00:28:12.057967 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:28:12.058067 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:28:12.074858 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:28:12.086219 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:28:12.097485 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:28:12.097565 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:28:12.108881 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:28:12.200929 ignition[2017]: INFO : Ignition 2.20.0 May 8 00:28:12.200929 ignition[2017]: INFO : Stage: umount May 8 00:28:12.200929 ignition[2017]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:28:12.200929 ignition[2017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 8 00:28:12.200929 ignition[2017]: INFO : umount: umount passed May 8 00:28:12.200929 ignition[2017]: INFO : POST message to Packet Timeline May 8 00:28:12.200929 ignition[2017]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 8 00:28:12.108953 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:28:12.120400 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:28:12.120491 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 8 00:28:12.131821 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:28:12.131905 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:28:12.143237 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 8 00:28:12.143341 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 8 00:28:12.176249 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:28:12.183918 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:28:12.195092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:28:12.195193 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:28:12.206848 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:28:12.206932 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:28:12.220522 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:28:12.223115 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:28:12.223190 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:28:12.259110 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:28:12.261072 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:28:12.733356 ignition[2017]: INFO : GET result: OK May 8 00:28:13.252012 ignition[2017]: INFO : Ignition finished successfully May 8 00:28:13.254970 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:28:13.255225 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:28:13.262351 systemd[1]: Stopped target network.target - Network. May 8 00:28:13.271319 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:28:13.271377 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:28:13.280725 systemd[1]: ignition-kargs.service: Deactivated successfully. 
May 8 00:28:13.280778 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:28:13.290224 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:28:13.290304 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:28:13.299846 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:28:13.299876 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:28:13.309500 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:28:13.309553 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:28:13.319361 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:28:13.328854 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:28:13.338803 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:28:13.338899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:28:13.352362 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:28:13.352617 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:28:13.352728 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:28:13.359834 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:28:13.361668 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:28:13.361822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:28:13.381221 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:28:13.388773 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:28:13.388820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:28:13.398770 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 8 00:28:13.398803 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:28:13.408661 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:28:13.408713 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:28:13.418617 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:28:13.418647 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:28:13.434113 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:28:13.445720 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:28:13.445802 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:28:13.456267 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:28:13.456397 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:28:13.467996 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:28:13.468155 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:28:13.477294 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:28:13.477345 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:28:13.488283 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:28:13.488320 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:28:13.499516 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:28:13.499554 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:28:13.515555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:28:13.515622 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:28:13.533163 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:28:13.543944 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:28:13.543987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:28:13.555391 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 00:28:13.555428 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:28:13.566602 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:28:13.566632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:28:13.578172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:28:13.578205 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:28:13.597437 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 8 00:28:13.597517 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:28:13.597832 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:28:13.597900 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:28:14.110302 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:28:14.111148 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:28:14.121513 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:28:14.142238 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:28:14.155790 systemd[1]: Switching root.
May 8 00:28:14.212194 systemd-journald[898]: Journal stopped
May 8 00:28:16.274258 systemd-journald[898]: Received SIGTERM from PID 1 (systemd).
May 8 00:28:16.274286 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:28:16.274297 kernel: SELinux: policy capability open_perms=1
May 8 00:28:16.274305 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:28:16.274312 kernel: SELinux: policy capability always_check_network=0
May 8 00:28:16.274319 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:28:16.274327 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:28:16.274336 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:28:16.274344 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:28:16.274352 kernel: audit: type=1403 audit(1746664094.382:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:28:16.274360 systemd[1]: Successfully loaded SELinux policy in 115.602ms.
May 8 00:28:16.274370 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.856ms.
May 8 00:28:16.274379 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:28:16.274388 systemd[1]: Detected architecture arm64.
May 8 00:28:16.274398 systemd[1]: Detected first boot.
May 8 00:28:16.274407 systemd[1]: Hostname set to .
May 8 00:28:16.274416 systemd[1]: Initializing machine ID from random generator.
May 8 00:28:16.274424 zram_generator::config[2097]: No configuration found.
May 8 00:28:16.274435 systemd[1]: Populated /etc with preset unit settings.
May 8 00:28:16.274444 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 00:28:16.274453 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:28:16.274462 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:28:16.274470 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:28:16.274479 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:28:16.274488 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:28:16.274498 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:28:16.274507 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:28:16.274516 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:28:16.274525 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:28:16.274534 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:28:16.274543 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:28:16.274552 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:28:16.274563 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:28:16.274573 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:28:16.274582 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:28:16.274591 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:28:16.274600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:28:16.274609 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 8 00:28:16.274618 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:28:16.274627 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:28:16.274638 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:28:16.274647 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:28:16.274658 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:28:16.274667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:28:16.274676 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:28:16.274685 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:28:16.274694 systemd[1]: Reached target swap.target - Swaps.
May 8 00:28:16.274703 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:28:16.274712 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:28:16.274722 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 00:28:16.274731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:28:16.274741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:28:16.274750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:28:16.274759 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:28:16.274769 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:28:16.274778 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:28:16.274787 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:28:16.274797 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:28:16.274806 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:28:16.274815 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:28:16.274824 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:28:16.274833 systemd[1]: Reached target machines.target - Containers.
May 8 00:28:16.274844 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:28:16.274853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:28:16.274862 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:28:16.274871 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:28:16.274880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:28:16.274889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:28:16.274898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:28:16.274907 kernel: ACPI: bus type drm_connector registered
May 8 00:28:16.274916 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:28:16.274926 kernel: fuse: init (API version 7.39)
May 8 00:28:16.274934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:28:16.274943 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:28:16.274954 kernel: loop: module loaded
May 8 00:28:16.274962 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:28:16.274971 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:28:16.274980 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:28:16.274989 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:28:16.275000 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:28:16.275010 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:28:16.275019 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:28:16.275045 systemd-journald[2211]: Collecting audit messages is disabled.
May 8 00:28:16.275070 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:28:16.275080 systemd-journald[2211]: Journal started
May 8 00:28:16.275099 systemd-journald[2211]: Runtime Journal (/run/log/journal/7eaed528d382491d84314fc8daada287) is 8M, max 4G, 3.9G free.
May 8 00:28:14.941444 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:28:14.956389 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 8 00:28:14.956708 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:28:14.957012 systemd[1]: systemd-journald.service: Consumed 3.396s CPU time.
May 8 00:28:16.326065 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:28:16.353064 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 00:28:16.374063 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:28:16.397028 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:28:16.397080 systemd[1]: Stopped verity-setup.service.
May 8 00:28:16.422062 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:28:16.427974 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:28:16.433450 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:28:16.438925 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:28:16.444296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:28:16.449652 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:28:16.454895 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:28:16.460333 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:28:16.465733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:28:16.472279 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:28:16.472449 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:28:16.477736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:28:16.477893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:28:16.485168 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:28:16.485392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:28:16.490772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:28:16.490930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:28:16.496192 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:28:16.496360 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:28:16.501659 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:28:16.501823 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:28:16.506858 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:28:16.512079 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:28:16.517219 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:28:16.522245 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 00:28:16.527446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:28:16.543243 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:28:16.560215 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:28:16.566025 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:28:16.570737 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:28:16.570767 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:28:16.576194 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 00:28:16.592223 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:28:16.597775 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:28:16.602430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:28:16.603960 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:28:16.609523 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:28:16.614607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:28:16.615752 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:28:16.620654 systemd-journald[2211]: Time spent on flushing to /var/log/journal/7eaed528d382491d84314fc8daada287 is 24.321ms for 2362 entries.
May 8 00:28:16.620654 systemd-journald[2211]: System Journal (/var/log/journal/7eaed528d382491d84314fc8daada287) is 8M, max 195.6M, 187.6M free.
May 8 00:28:16.662109 systemd-journald[2211]: Received client request to flush runtime journal.
May 8 00:28:16.662154 kernel: loop0: detected capacity change from 0 to 123192
May 8 00:28:16.620603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:28:16.622792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:28:16.638629 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:28:16.644320 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:28:16.650022 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:28:16.667018 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:28:16.670017 systemd-tmpfiles[2254]: ACLs are not supported, ignoring.
May 8 00:28:16.670029 systemd-tmpfiles[2254]: ACLs are not supported, ignoring.
May 8 00:28:16.676343 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:28:16.680283 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:28:16.684812 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:28:16.690624 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:28:16.695275 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:28:16.700128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:28:16.704943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:28:16.715497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:28:16.732061 kernel: loop1: detected capacity change from 0 to 113512
May 8 00:28:16.737366 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 00:28:16.743474 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:28:16.749534 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:28:16.751189 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 00:28:16.757922 udevadm[2256]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:28:16.769884 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:28:16.787197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:28:16.795057 kernel: loop2: detected capacity change from 0 to 201592
May 8 00:28:16.805458 systemd-tmpfiles[2288]: ACLs are not supported, ignoring.
May 8 00:28:16.805471 systemd-tmpfiles[2288]: ACLs are not supported, ignoring.
May 8 00:28:16.809215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:28:16.852061 kernel: loop3: detected capacity change from 0 to 8
May 8 00:28:16.862855 ldconfig[2243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:28:16.864549 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:28:16.894065 kernel: loop4: detected capacity change from 0 to 123192
May 8 00:28:16.910064 kernel: loop5: detected capacity change from 0 to 113512
May 8 00:28:16.925061 kernel: loop6: detected capacity change from 0 to 201592
May 8 00:28:16.943062 kernel: loop7: detected capacity change from 0 to 8
May 8 00:28:16.943621 (sd-merge)[2298]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
May 8 00:28:16.944062 (sd-merge)[2298]: Merged extensions into '/usr'.
May 8 00:28:16.947020 systemd[1]: Reload requested from client PID 2253 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:28:16.947032 systemd[1]: Reloading...
May 8 00:28:16.996061 zram_generator::config[2330]: No configuration found.
May 8 00:28:17.088114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:28:17.149571 systemd[1]: Reloading finished in 202 ms.
May 8 00:28:17.165472 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:28:17.170331 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:28:17.190534 systemd[1]: Starting ensure-sysext.service...
May 8 00:28:17.196229 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:28:17.202846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:28:17.213598 systemd[1]: Reload requested from client PID 2381 ('systemctl') (unit ensure-sysext.service)...
May 8 00:28:17.213608 systemd[1]: Reloading...
May 8 00:28:17.215969 systemd-tmpfiles[2382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:28:17.216166 systemd-tmpfiles[2382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:28:17.216813 systemd-tmpfiles[2382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:28:17.217013 systemd-tmpfiles[2382]: ACLs are not supported, ignoring.
May 8 00:28:17.217063 systemd-tmpfiles[2382]: ACLs are not supported, ignoring.
May 8 00:28:17.219689 systemd-tmpfiles[2382]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:28:17.219696 systemd-tmpfiles[2382]: Skipping /boot
May 8 00:28:17.228140 systemd-tmpfiles[2382]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:28:17.228147 systemd-tmpfiles[2382]: Skipping /boot
May 8 00:28:17.231120 systemd-udevd[2383]: Using default interface naming scheme 'v255'.
May 8 00:28:17.256059 zram_generator::config[2415]: No configuration found.
May 8 00:28:17.289073 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (2446)
May 8 00:28:17.314063 kernel: IPMI message handler: version 39.2
May 8 00:28:17.323062 kernel: ipmi device interface
May 8 00:28:17.335057 kernel: ipmi_ssif: IPMI SSIF Interface driver
May 8 00:28:17.335147 kernel: ipmi_si: IPMI System Interface driver
May 8 00:28:17.348146 kernel: ipmi_si: Unable to find any System Interface(s)
May 8 00:28:17.362170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:28:17.443283 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 8 00:28:17.443325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 8 00:28:17.447915 systemd[1]: Reloading finished in 234 ms.
May 8 00:28:17.460551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:28:17.481929 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:28:17.504490 systemd[1]: Finished ensure-sysext.service.
May 8 00:28:17.509192 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:28:17.549211 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:28:17.555327 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:28:17.560447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:28:17.561492 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:28:17.567443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:28:17.573302 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:28:17.578823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:28:17.579139 lvm[2601]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:28:17.584405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:28:17.589178 augenrules[2623]: No rules
May 8 00:28:17.589233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:28:17.590114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:28:17.594698 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:28:17.595894 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:28:17.602220 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:28:17.608568 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:28:17.614763 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:28:17.620289 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:28:17.625801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:28:17.631207 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:28:17.631971 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:28:17.637041 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:28:17.643667 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:28:17.648475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:28:17.648629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:28:17.653561 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:28:17.653722 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:28:17.658433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:28:17.658580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:28:17.663410 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:28:17.663558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:28:17.668293 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:28:17.673674 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:28:17.678386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:28:17.691686 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:28:17.714221 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:28:17.718105 lvm[2653]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:28:17.718693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:28:17.718762 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:28:17.720019 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:28:17.726509 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:28:17.731072 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:28:17.731569 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:28:17.736378 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:28:17.754465 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:28:17.761487 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:28:17.821755 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:28:17.826765 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:28:17.830206 systemd-resolved[2632]: Positive Trust Anchors:
May 8 00:28:17.830219 systemd-resolved[2632]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:28:17.830251 systemd-resolved[2632]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:28:17.833959 systemd-resolved[2632]: Using system hostname 'ci-4230.1.1-n-e3459bc746'.
May 8 00:28:17.835268 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:28:17.835912 systemd-networkd[2631]: lo: Link UP
May 8 00:28:17.835917 systemd-networkd[2631]: lo: Gained carrier
May 8 00:28:17.839569 systemd-networkd[2631]: bond0: netdev ready
May 8 00:28:17.839617 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:28:17.843917 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:28:17.848137 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:28:17.848630 systemd-networkd[2631]: Enumeration completed
May 8 00:28:17.852340 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:28:17.856737 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:28:17.860320 systemd-networkd[2631]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5a:81:80.network.
May 8 00:28:17.861012 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:28:17.865295 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:28:17.869634 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:28:17.869654 systemd[1]: Reached target paths.target - Path Units.
May 8 00:28:17.873921 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:28:17.879059 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:28:17.884804 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:28:17.891073 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 00:28:17.897936 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:28:17.902818 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 00:28:17.907798 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:28:17.912409 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:28:17.916866 systemd[1]: Reached target network.target - Network.
May 8 00:28:17.921189 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:28:17.925439 systemd[1]: Reached target basic.target - Basic System.
May 8 00:28:17.929662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:28:17.929681 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:28:17.942151 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:28:17.947699 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 00:28:17.953255 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:28:17.958858 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:28:17.964462 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:28:17.968657 jq[2688]: false
May 8 00:28:17.968821 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:28:17.969409 coreos-metadata[2684]: May 08 00:28:17.969 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 00:28:17.969980 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:28:17.972297 coreos-metadata[2684]: May 08 00:28:17.972 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 8 00:28:17.974665 dbus-daemon[2685]: [system] SELinux support is enabled
May 8 00:28:17.975531 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:28:17.981130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:28:17.984157 extend-filesystems[2689]: Found loop4
May 8 00:28:17.990202 extend-filesystems[2689]: Found loop5
May 8 00:28:17.990202 extend-filesystems[2689]: Found loop6
May 8 00:28:17.990202 extend-filesystems[2689]: Found loop7
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1p1
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1p2
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1p3
May 8 00:28:17.990202 extend-filesystems[2689]: Found usr
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1p4
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1p6
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1p7
May 8 00:28:17.990202 extend-filesystems[2689]: Found nvme0n1p9
May 8 00:28:17.990202 extend-filesystems[2689]: Checking size of /dev/nvme0n1p9
May 8 00:28:18.128160 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks
May 8 00:28:18.128184 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (2465)
May 8 00:28:17.986906 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:28:18.128248 extend-filesystems[2689]: Resized partition /dev/nvme0n1p9
May 8 00:28:18.126507 dbus-daemon[2685]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 8 00:28:17.998808 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:28:18.133149 extend-filesystems[2711]: resize2fs 1.47.1 (20-May-2024)
May 8 00:28:18.004955 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 00:28:18.045094 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:28:18.054232 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:28:18.054827 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:28:18.138210 update_engine[2720]: I20250508 00:28:18.103416 2720 main.cc:92] Flatcar Update Engine starting
May 8 00:28:18.138210 update_engine[2720]: I20250508 00:28:18.106238 2720 update_check_scheduler.cc:74] Next update check in 4m52s
May 8 00:28:18.055530 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:28:18.138475 jq[2721]: true
May 8 00:28:18.063616 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:28:18.072396 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:28:18.138742 tar[2724]: linux-arm64/LICENSE
May 8 00:28:18.138742 tar[2724]: linux-arm64/helm
May 8 00:28:18.080872 systemd-logind[2709]: Watching system buttons on /dev/input/event0 (Power Button)
May 8 00:28:18.139093 jq[2725]: true
May 8 00:28:18.081332 systemd-logind[2709]: New seat seat0.
May 8 00:28:18.088025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:28:18.090082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:28:18.090474 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:28:18.090732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:28:18.096294 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:28:18.105806 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:28:18.106292 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:28:18.124344 (ntainerd)[2726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:28:18.145164 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:28:18.151340 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:28:18.151495 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:28:18.156280 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:28:18.156387 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:28:18.162856 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:28:18.165911 bash[2750]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:28:18.171729 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:28:18.179061 systemd[1]: Starting sshkeys.service...
May 8 00:28:18.191816 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 8 00:28:18.191878 locksmithd[2752]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:28:18.197896 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 8 00:28:18.217543 coreos-metadata[2768]: May 08 00:28:18.217 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 00:28:18.218592 coreos-metadata[2768]: May 08 00:28:18.218 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 8 00:28:18.261531 containerd[2726]: time="2025-05-08T00:28:18.261443320Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 8 00:28:18.283272 containerd[2726]: time="2025-05-08T00:28:18.283230720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:28:18.285018 containerd[2726]: time="2025-05-08T00:28:18.284981680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:28:18.285066 containerd[2726]: time="2025-05-08T00:28:18.285017960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:28:18.285066 containerd[2726]: time="2025-05-08T00:28:18.285036720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:28:18.285238 containerd[2726]: time="2025-05-08T00:28:18.285220240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:28:18.285265 containerd[2726]: time="2025-05-08T00:28:18.285241360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:28:18.285314 containerd[2726]: time="2025-05-08T00:28:18.285298440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:28:18.285314 containerd[2726]: time="2025-05-08T00:28:18.285311680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:28:18.285533 containerd[2726]: time="2025-05-08T00:28:18.285515280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:28:18.285553 containerd[2726]: time="2025-05-08T00:28:18.285532440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:28:18.285553 containerd[2726]: time="2025-05-08T00:28:18.285545440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:28:18.285592 containerd[2726]: time="2025-05-08T00:28:18.285556160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:28:18.285646 containerd[2726]: time="2025-05-08T00:28:18.285631360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:28:18.285939 containerd[2726]: time="2025-05-08T00:28:18.285923560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:28:18.286087 containerd[2726]: time="2025-05-08T00:28:18.286070200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:28:18.286108 containerd[2726]: time="2025-05-08T00:28:18.286087360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:28:18.286185 containerd[2726]: time="2025-05-08T00:28:18.286171480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:28:18.286226 containerd[2726]: time="2025-05-08T00:28:18.286214440Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:28:18.293552 containerd[2726]: time="2025-05-08T00:28:18.293529960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:28:18.293590 containerd[2726]: time="2025-05-08T00:28:18.293570520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:28:18.293590 containerd[2726]: time="2025-05-08T00:28:18.293586760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:28:18.293663 containerd[2726]: time="2025-05-08T00:28:18.293607000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:28:18.293663 containerd[2726]: time="2025-05-08T00:28:18.293625920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:28:18.293781 containerd[2726]: time="2025-05-08T00:28:18.293765920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:28:18.293988 containerd[2726]: time="2025-05-08T00:28:18.293976240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:28:18.294106 containerd[2726]: time="2025-05-08T00:28:18.294092440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:28:18.294128 containerd[2726]: time="2025-05-08T00:28:18.294111440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:28:18.294146 containerd[2726]: time="2025-05-08T00:28:18.294131760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:28:18.294163 containerd[2726]: time="2025-05-08T00:28:18.294146040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294180 containerd[2726]: time="2025-05-08T00:28:18.294160200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294180 containerd[2726]: time="2025-05-08T00:28:18.294173040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294211 containerd[2726]: time="2025-05-08T00:28:18.294187120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294211 containerd[2726]: time="2025-05-08T00:28:18.294201440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294245 containerd[2726]: time="2025-05-08T00:28:18.294213000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294245 containerd[2726]: time="2025-05-08T00:28:18.294227160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294245 containerd[2726]: time="2025-05-08T00:28:18.294239640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:28:18.294290 containerd[2726]: time="2025-05-08T00:28:18.294263120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294290 containerd[2726]: time="2025-05-08T00:28:18.294278360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294325 containerd[2726]: time="2025-05-08T00:28:18.294290440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294325 containerd[2726]: time="2025-05-08T00:28:18.294303480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294325 containerd[2726]: time="2025-05-08T00:28:18.294314680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294375 containerd[2726]: time="2025-05-08T00:28:18.294326320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294375 containerd[2726]: time="2025-05-08T00:28:18.294337600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294375 containerd[2726]: time="2025-05-08T00:28:18.294353880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294375 containerd[2726]: time="2025-05-08T00:28:18.294366080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294443 containerd[2726]: time="2025-05-08T00:28:18.294379720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294443 containerd[2726]: time="2025-05-08T00:28:18.294391840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294443 containerd[2726]: time="2025-05-08T00:28:18.294404840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294443 containerd[2726]: time="2025-05-08T00:28:18.294416600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294443 containerd[2726]: time="2025-05-08T00:28:18.294430760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:28:18.294522 containerd[2726]: time="2025-05-08T00:28:18.294450680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294522 containerd[2726]: time="2025-05-08T00:28:18.294464280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294522 containerd[2726]: time="2025-05-08T00:28:18.294475480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:28:18.294659 containerd[2726]: time="2025-05-08T00:28:18.294649640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:28:18.294677 containerd[2726]: time="2025-05-08T00:28:18.294667880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:28:18.294696 containerd[2726]: time="2025-05-08T00:28:18.294677680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:28:18.294696 containerd[2726]: time="2025-05-08T00:28:18.294690360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:28:18.294731 containerd[2726]: time="2025-05-08T00:28:18.294699800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:28:18.294731 containerd[2726]: time="2025-05-08T00:28:18.294715520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:28:18.294731 containerd[2726]: time="2025-05-08T00:28:18.294728360Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:28:18.294778 containerd[2726]: time="2025-05-08T00:28:18.294740960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:28:18.295118 containerd[2726]: time="2025-05-08T00:28:18.295080520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:28:18.295218 containerd[2726]: time="2025-05-08T00:28:18.295128800Z" level=info msg="Connect containerd service"
May 8 00:28:18.295218 containerd[2726]: time="2025-05-08T00:28:18.295159200Z" level=info msg="using legacy CRI server"
May 8 00:28:18.295218 containerd[2726]: time="2025-05-08T00:28:18.295165280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:28:18.295432 containerd[2726]: time="2025-05-08T00:28:18.295417320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:28:18.296071 containerd[2726]: time="2025-05-08T00:28:18.296042400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:28:18.296278 containerd[2726]: time="2025-05-08T00:28:18.296239800Z" level=info msg="Start subscribing containerd event"
May 8 00:28:18.296307 containerd[2726]: time="2025-05-08T00:28:18.296295040Z" level=info msg="Start recovering state"
May 8 00:28:18.296376 containerd[2726]: time="2025-05-08T00:28:18.296363880Z" level=info msg="Start event monitor"
May 8 00:28:18.296395 containerd[2726]: time="2025-05-08T00:28:18.296376080Z" level=info msg="Start snapshots syncer"
May 8 00:28:18.296395 containerd[2726]: time="2025-05-08T00:28:18.296385680Z" level=info msg="Start cni network conf syncer for default"
May 8 00:28:18.296395 containerd[2726]: time="2025-05-08T00:28:18.296394320Z" level=info msg="Start streaming server"
May 8 00:28:18.296533 containerd[2726]: time="2025-05-08T00:28:18.296517320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:28:18.296593 containerd[2726]: time="2025-05-08T00:28:18.296558680Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:28:18.296613 containerd[2726]: time="2025-05-08T00:28:18.296605680Z" level=info msg="containerd successfully booted in 0.036018s"
May 8 00:28:18.296656 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:28:18.462570 tar[2724]: linux-arm64/README.md
May 8 00:28:18.479085 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:28:18.546063 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889
May 8 00:28:18.561374 extend-filesystems[2711]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 8 00:28:18.561374 extend-filesystems[2711]: old_desc_blocks = 1, new_desc_blocks = 112
May 8 00:28:18.561374 extend-filesystems[2711]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long.
May 8 00:28:18.592169 extend-filesystems[2689]: Resized filesystem in /dev/nvme0n1p9
May 8 00:28:18.592169 extend-filesystems[2689]: Found nvme1n1
May 8 00:28:18.563934 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:28:18.564305 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:28:18.578494 systemd[1]: extend-filesystems.service: Consumed 206ms CPU time, 68.9M memory peak.
May 8 00:28:18.776008 sshd_keygen[2714]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:28:18.794490 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:28:18.814402 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:28:18.826319 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:28:18.827314 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:28:18.846307 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:28:18.854075 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:28:18.860435 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:28:18.866445 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 8 00:28:18.871632 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:28:18.972418 coreos-metadata[2684]: May 08 00:28:18.972 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 8 00:28:18.972780 coreos-metadata[2684]: May 08 00:28:18.972 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 8 00:28:19.173060 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 8 00:28:19.190055 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link
May 8 00:28:19.190890 systemd-networkd[2631]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5a:81:81.network.
May 8 00:28:19.218744 coreos-metadata[2768]: May 08 00:28:19.218 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 8 00:28:19.219162 coreos-metadata[2768]: May 08 00:28:19.219 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 8 00:28:19.803070 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 8 00:28:19.819088 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link
May 8 00:28:19.819218 systemd-networkd[2631]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
May 8 00:28:19.820726 systemd-networkd[2631]: enP1p1s0f0np0: Link UP
May 8 00:28:19.820970 systemd-networkd[2631]: enP1p1s0f0np0: Gained carrier
May 8 00:28:19.820993 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 00:28:19.839064 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
May 8 00:28:19.846389 systemd-networkd[2631]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5a:81:80.network.
May 8 00:28:19.846668 systemd-networkd[2631]: enP1p1s0f1np1: Link UP
May 8 00:28:19.846862 systemd-networkd[2631]: enP1p1s0f1np1: Gained carrier
May 8 00:28:19.863254 systemd-networkd[2631]: bond0: Link UP
May 8 00:28:19.863495 systemd-networkd[2631]: bond0: Gained carrier
May 8 00:28:19.863682 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:19.864282 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:19.864526 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:19.864679 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:19.940342 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex
May 8 00:28:19.940380 kernel: bond0: active interface up!
May 8 00:28:20.064054 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex
May 8 00:28:20.972872 coreos-metadata[2684]: May 08 00:28:20.972 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
May 8 00:28:21.219243 coreos-metadata[2768]: May 08 00:28:21.219 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
May 8 00:28:21.793634 systemd-networkd[2631]: bond0: Gained IPv6LL
May 8 00:28:21.794094 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:21.857862 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:21.857938 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:21.860626 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:28:21.866379 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:28:21.884330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:28:21.890952 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:28:21.913411 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:28:22.483144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:28:22.489082 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:28:22.895226 kubelet[2835]: E0508 00:28:22.895149 2835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:28:22.897360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:28:22.897502 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:28:22.897839 systemd[1]: kubelet.service: Consumed 701ms CPU time, 263.1M memory peak.
May 8 00:28:23.342787 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:28:23.360371 systemd[1]: Started sshd@0-147.28.129.25:22-218.92.0.158:15370.service - OpenSSH per-connection server daemon (218.92.0.158:15370).
May 8 00:28:23.479145 systemd[1]: Started sshd@1-147.28.129.25:22-139.178.68.195:33526.service - OpenSSH per-connection server daemon (139.178.68.195:33526).
May 8 00:28:23.526121 coreos-metadata[2684]: May 08 00:28:23.526 INFO Fetch successful
May 8 00:28:23.555182 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2
May 8 00:28:23.555446 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity
May 8 00:28:23.593135 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 8 00:28:23.599912 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
May 8 00:28:23.745341 coreos-metadata[2768]: May 08 00:28:23.745 INFO Fetch successful
May 8 00:28:23.796127 unknown[2768]: wrote ssh authorized keys file for user: core
May 8 00:28:23.824528 update-ssh-keys[2872]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:28:23.825775 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 8 00:28:23.832125 systemd[1]: Finished sshkeys.service.
May 8 00:28:23.895039 login[2813]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
May 8 00:28:23.895518 login[2812]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:23.897755 sshd[2861]: Accepted publickey for core from 139.178.68.195 port 33526 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:28:23.899018 sshd-session[2861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:23.904825 systemd-logind[2709]: New session 2 of user core.
May 8 00:28:23.905923 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:28:23.916357 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:28:23.918286 systemd-logind[2709]: New session 3 of user core.
May 8 00:28:23.924058 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:28:23.926279 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:28:23.940956 (systemd)[2881]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:28:23.942418 systemd-logind[2709]: New session c1 of user core.
May 8 00:28:24.047079 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
May 8 00:28:24.047485 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:28:24.058887 systemd[2881]: Queued start job for default target default.target.
May 8 00:28:24.068249 systemd[2881]: Created slice app.slice - User Application Slice.
May 8 00:28:24.068275 systemd[2881]: Reached target paths.target - Paths.
May 8 00:28:24.068307 systemd[2881]: Reached target timers.target - Timers.
May 8 00:28:24.069586 systemd[2881]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:28:24.077879 systemd[2881]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:28:24.077934 systemd[2881]: Reached target sockets.target - Sockets.
May 8 00:28:24.077977 systemd[2881]: Reached target basic.target - Basic System.
May 8 00:28:24.078005 systemd[2881]: Reached target default.target - Main User Target.
May 8 00:28:24.078028 systemd[2881]: Startup finished in 130ms.
May 8 00:28:24.078207 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:28:24.079639 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:28:24.080462 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:28:24.080606 systemd[1]: Startup finished in 3.227s (kernel) + 18.909s (initrd) + 9.813s (userspace) = 31.950s.
May 8 00:28:24.386822 systemd[1]: Started sshd@2-147.28.129.25:22-139.178.68.195:33540.service - OpenSSH per-connection server daemon (139.178.68.195:33540).
May 8 00:28:24.818188 sshd[2905]: Accepted publickey for core from 139.178.68.195 port 33540 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:28:24.819575 sshd-session[2905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:24.822907 systemd-logind[2709]: New session 4 of user core.
May 8 00:28:24.831216 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:28:24.896881 login[2813]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:24.900046 systemd-logind[2709]: New session 1 of user core.
May 8 00:28:24.909217 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:28:25.119380 sshd[2907]: Connection closed by 139.178.68.195 port 33540
May 8 00:28:25.119865 sshd-session[2905]: pam_unix(sshd:session): session closed for user core
May 8 00:28:25.123493 systemd[1]: sshd@2-147.28.129.25:22-139.178.68.195:33540.service: Deactivated successfully.
May 8 00:28:25.125745 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:28:25.126382 systemd-logind[2709]: Session 4 logged out. Waiting for processes to exit.
May 8 00:28:25.126936 systemd-logind[2709]: Removed session 4.
May 8 00:28:25.192762 systemd[1]: Started sshd@3-147.28.129.25:22-139.178.68.195:33552.service - OpenSSH per-connection server daemon (139.178.68.195:33552).
May 8 00:28:25.502740 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:25.625118 sshd[2921]: Accepted publickey for core from 139.178.68.195 port 33552 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:28:25.626133 sshd-session[2921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:25.629106 systemd-logind[2709]: New session 5 of user core.
May 8 00:28:25.646166 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:28:25.922433 sshd[2923]: Connection closed by 139.178.68.195 port 33552
May 8 00:28:25.922888 sshd-session[2921]: pam_unix(sshd:session): session closed for user core
May 8 00:28:25.926420 systemd[1]: sshd@3-147.28.129.25:22-139.178.68.195:33552.service: Deactivated successfully.
May 8 00:28:25.928213 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:28:25.928751 systemd-logind[2709]: Session 5 logged out. Waiting for processes to exit.
May 8 00:28:25.929293 systemd-logind[2709]: Removed session 5.
May 8 00:28:25.997869 systemd[1]: Started sshd@4-147.28.129.25:22-139.178.68.195:42074.service - OpenSSH per-connection server daemon (139.178.68.195:42074).
May 8 00:28:26.438235 sshd[2929]: Accepted publickey for core from 139.178.68.195 port 42074 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:28:26.439204 sshd-session[2929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:26.441924 systemd-logind[2709]: New session 6 of user core.
May 8 00:28:26.452154 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:28:26.743368 sshd[2931]: Connection closed by 139.178.68.195 port 42074
May 8 00:28:26.743870 sshd-session[2929]: pam_unix(sshd:session): session closed for user core
May 8 00:28:26.747397 systemd[1]: sshd@4-147.28.129.25:22-139.178.68.195:42074.service: Deactivated successfully.
May 8 00:28:26.749383 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:28:26.749966 systemd-logind[2709]: Session 6 logged out. Waiting for processes to exit.
May 8 00:28:26.750516 systemd-logind[2709]: Removed session 6.
May 8 00:28:26.815835 systemd[1]: Started sshd@5-147.28.129.25:22-139.178.68.195:42076.service - OpenSSH per-connection server daemon (139.178.68.195:42076).
May 8 00:28:27.233143 sshd[2938]: Accepted publickey for core from 139.178.68.195 port 42076 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:28:27.234214 sshd-session[2938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:27.236992 systemd-logind[2709]: New session 7 of user core.
May 8 00:28:27.246230 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:28:27.468548 sudo[2941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:28:27.468819 sudo[2941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:28:27.479831 sudo[2941]: pam_unix(sudo:session): session closed for user root
May 8 00:28:27.541264 sshd[2940]: Connection closed by 139.178.68.195 port 42076
May 8 00:28:27.541598 sshd-session[2938]: pam_unix(sshd:session): session closed for user core
May 8 00:28:27.544584 systemd[1]: sshd@5-147.28.129.25:22-139.178.68.195:42076.service: Deactivated successfully.
May 8 00:28:27.547455 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:28:27.547984 systemd-logind[2709]: Session 7 logged out. Waiting for processes to exit.
May 8 00:28:27.548584 systemd-logind[2709]: Removed session 7.
May 8 00:28:27.613919 systemd[1]: Started sshd@6-147.28.129.25:22-139.178.68.195:42082.service - OpenSSH per-connection server daemon (139.178.68.195:42082).
May 8 00:28:28.039367 sshd[2948]: Accepted publickey for core from 139.178.68.195 port 42082 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:28:28.040447 sshd-session[2948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:28.043469 systemd-logind[2709]: New session 8 of user core.
May 8 00:28:28.052222 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:28:28.272067 sudo[2952]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:28:28.272340 sudo[2952]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:28:28.274781 sudo[2952]: pam_unix(sudo:session): session closed for user root
May 8 00:28:28.279029 sudo[2951]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 8 00:28:28.279297 sudo[2951]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:28:28.300289 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:28:28.322360 augenrules[2974]: No rules
May 8 00:28:28.323470 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:28:28.323679 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:28:28.324378 sudo[2951]: pam_unix(sudo:session): session closed for user root
May 8 00:28:28.388156 sshd[2950]: Connection closed by 139.178.68.195 port 42082
May 8 00:28:28.388648 sshd-session[2948]: pam_unix(sshd:session): session closed for user core
May 8 00:28:28.392435 systemd[1]: sshd@6-147.28.129.25:22-139.178.68.195:42082.service: Deactivated successfully.
May 8 00:28:28.394712 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:28:28.395284 systemd-logind[2709]: Session 8 logged out. Waiting for processes to exit.
May 8 00:28:28.395853 systemd-logind[2709]: Removed session 8.
May 8 00:28:28.457716 systemd[1]: Started sshd@7-147.28.129.25:22-139.178.68.195:42090.service - OpenSSH per-connection server daemon (139.178.68.195:42090).
May 8 00:28:28.876911 sshd[2983]: Accepted publickey for core from 139.178.68.195 port 42090 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:28:28.877893 sshd-session[2983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:28:28.880793 systemd-logind[2709]: New session 9 of user core.
May 8 00:28:28.898159 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:28:29.107570 sudo[2986]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:28:29.107841 sudo[2986]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:28:29.407253 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:28:29.407376 (dockerd)[3017]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:28:29.618881 dockerd[3017]: time="2025-05-08T00:28:29.618835920Z" level=info msg="Starting up"
May 8 00:28:29.679994 dockerd[3017]: time="2025-05-08T00:28:29.679932480Z" level=info msg="Loading containers: start."
May 8 00:28:29.827063 kernel: Initializing XFRM netlink socket
May 8 00:28:29.844864 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection.
May 8 00:28:29.896839 systemd-networkd[2631]: docker0: Link UP
May 8 00:28:29.924228 dockerd[3017]: time="2025-05-08T00:28:29.924203200Z" level=info msg="Loading containers: done."
May 8 00:28:29.933000 dockerd[3017]: time="2025-05-08T00:28:29.932944000Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:28:29.933091 dockerd[3017]: time="2025-05-08T00:28:29.933016400Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 8 00:28:29.933227 dockerd[3017]: time="2025-05-08T00:28:29.933210280Z" level=info msg="Daemon has completed initialization"
May 8 00:28:29.948676 systemd-timesyncd[2633]: Contacted time server [2607:b500:410:7700::1]:123 (2.flatcar.pool.ntp.org).
May 8 00:28:29.948724 systemd-timesyncd[2633]: Initial clock synchronization to Thu 2025-05-08 00:28:29.675668 UTC.
May 8 00:28:29.953152 dockerd[3017]: time="2025-05-08T00:28:29.953029480Z" level=info msg="API listen on /run/docker.sock"
May 8 00:28:29.953140 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:28:30.508936 containerd[2726]: time="2025-05-08T00:28:30.508906731Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 8 00:28:30.670956 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3664215477-merged.mount: Deactivated successfully.
May 8 00:28:31.033279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275062174.mount: Deactivated successfully.
May 8 00:28:32.838422 containerd[2726]: time="2025-05-08T00:28:32.838383300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:32.838747 containerd[2726]: time="2025-05-08T00:28:32.838394870Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118"
May 8 00:28:32.839492 containerd[2726]: time="2025-05-08T00:28:32.839473119Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:32.842362 containerd[2726]: time="2025-05-08T00:28:32.842340051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:32.843466 containerd[2726]: time="2025-05-08T00:28:32.843439337Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.334495135s"
May 8 00:28:32.843492 containerd[2726]: time="2025-05-08T00:28:32.843476033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 8 00:28:32.844051 containerd[2726]: time="2025-05-08T00:28:32.844029435Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 8 00:28:33.148505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:28:33.161193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:28:33.259645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:28:33.262949 (kubelet)[3323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:28:33.293838 kubelet[3323]: E0508 00:28:33.293798 3323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:28:33.296509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:28:33.296643 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:28:33.296987 systemd[1]: kubelet.service: Consumed 131ms CPU time, 115.6M memory peak.
May 8 00:28:34.286266 containerd[2726]: time="2025-05-08T00:28:34.286226305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:34.286492 containerd[2726]: time="2025-05-08T00:28:34.286289927Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571"
May 8 00:28:34.287220 containerd[2726]: time="2025-05-08T00:28:34.287198160Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:34.290080 containerd[2726]: time="2025-05-08T00:28:34.290058450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:34.291117 containerd[2726]: time="2025-05-08T00:28:34.291089066Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.447026043s"
May 8 00:28:34.291167 containerd[2726]: time="2025-05-08T00:28:34.291121563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 8 00:28:34.291490 containerd[2726]: time="2025-05-08T00:28:34.291470524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 8 00:28:35.485154 containerd[2726]: time="2025-05-08T00:28:35.485113019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:35.485441 containerd[2726]: time="2025-05-08T00:28:35.485147840Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173"
May 8 00:28:35.486239 containerd[2726]: time="2025-05-08T00:28:35.486219086Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:35.489010 containerd[2726]: time="2025-05-08T00:28:35.488986337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:35.490142 containerd[2726]: time="2025-05-08T00:28:35.490104312Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.198596942s"
May 8 00:28:35.490189 containerd[2726]: time="2025-05-08T00:28:35.490151079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 8 00:28:35.490462 containerd[2726]: time="2025-05-08T00:28:35.490441351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 8 00:28:36.359535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362502175.mount: Deactivated successfully.
May 8 00:28:36.732946 containerd[2726]: time="2025-05-08T00:28:36.732838775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:36.732946 containerd[2726]: time="2025-05-08T00:28:36.732876469Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351"
May 8 00:28:36.733641 containerd[2726]: time="2025-05-08T00:28:36.733589190Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:36.735391 containerd[2726]: time="2025-05-08T00:28:36.735367783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:36.736081 containerd[2726]: time="2025-05-08T00:28:36.736056478Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.245581067s"
May 8 00:28:36.736110 containerd[2726]: time="2025-05-08T00:28:36.736086097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 8 00:28:36.736409 containerd[2726]: time="2025-05-08T00:28:36.736397654Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 8 00:28:37.204025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526533044.mount: Deactivated successfully.
May 8 00:28:38.567283 containerd[2726]: time="2025-05-08T00:28:38.567236978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:38.567599 containerd[2726]: time="2025-05-08T00:28:38.567298647Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
May 8 00:28:38.568430 containerd[2726]: time="2025-05-08T00:28:38.568411767Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:38.571573 containerd[2726]: time="2025-05-08T00:28:38.571548334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:38.572891 containerd[2726]: time="2025-05-08T00:28:38.572849505Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.836416831s"
May 8 00:28:38.572920 containerd[2726]: time="2025-05-08T00:28:38.572903860Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 8 00:28:38.573259 containerd[2726]: time="2025-05-08T00:28:38.573241418Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 8 00:28:38.855290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000086138.mount: Deactivated successfully.
May 8 00:28:38.855715 containerd[2726]: time="2025-05-08T00:28:38.855661065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:38.855796 containerd[2726]: time="2025-05-08T00:28:38.855719137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 8 00:28:38.856478 containerd[2726]: time="2025-05-08T00:28:38.856448251Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:38.868064 containerd[2726]: time="2025-05-08T00:28:38.868007621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:38.868762 containerd[2726]: time="2025-05-08T00:28:38.868683644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 295.413527ms"
May 8 00:28:38.868762 containerd[2726]: time="2025-05-08T00:28:38.868718116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 8 00:28:38.869202 containerd[2726]: time="2025-05-08T00:28:38.868999934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 8 00:28:39.162272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106316690.mount: Deactivated successfully.
May 8 00:28:42.071426 containerd[2726]: time="2025-05-08T00:28:42.071384700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:42.071846 containerd[2726]: time="2025-05-08T00:28:42.071419023Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
May 8 00:28:42.072533 containerd[2726]: time="2025-05-08T00:28:42.072510752Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:42.075658 containerd[2726]: time="2025-05-08T00:28:42.075634743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:28:42.076896 containerd[2726]: time="2025-05-08T00:28:42.076866463Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.207830397s"
May 8 00:28:42.076923 containerd[2726]: time="2025-05-08T00:28:42.076903368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 8 00:28:43.500346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:28:43.511489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:28:43.609713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:28:43.613055 (kubelet)[3555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:28:43.643423 kubelet[3555]: E0508 00:28:43.643382 3555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:28:43.645418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:28:43.645546 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:28:43.647208 systemd[1]: kubelet.service: Consumed 128ms CPU time, 114.7M memory peak.
May 8 00:28:47.279858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:28:47.279989 systemd[1]: kubelet.service: Consumed 128ms CPU time, 114.7M memory peak.
May 8 00:28:47.290422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:28:47.308625 systemd[1]: Reload requested from client PID 3589 ('systemctl') (unit session-9.scope)...
May 8 00:28:47.308635 systemd[1]: Reloading...
May 8 00:28:47.384057 zram_generator::config[3640]: No configuration found.
May 8 00:28:47.473214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:28:47.565235 systemd[1]: Reloading finished in 256 ms.
May 8 00:28:47.608808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:28:47.611454 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:28:47.612289 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:28:47.613161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:28:47.613204 systemd[1]: kubelet.service: Consumed 79ms CPU time, 90.4M memory peak.
May 8 00:28:47.614722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:28:47.712839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:28:47.716242 (kubelet)[3704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:28:47.758116 kubelet[3704]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:28:47.758116 kubelet[3704]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 8 00:28:47.758116 kubelet[3704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:28:47.758404 kubelet[3704]: I0508 00:28:47.758172 3704 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:28:48.221558 kubelet[3704]: I0508 00:28:48.221533 3704 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 8 00:28:48.221558 kubelet[3704]: I0508 00:28:48.221557 3704 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:28:48.221816 kubelet[3704]: I0508 00:28:48.221802 3704 server.go:954] "Client rotation is on, will bootstrap in background"
May 8 00:28:48.256485 kubelet[3704]: E0508 00:28:48.256457 3704 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.129.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError"
May 8 00:28:48.258830 kubelet[3704]: I0508 00:28:48.258807 3704 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:28:48.264561 kubelet[3704]: E0508 00:28:48.264535 3704 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:28:48.264586 kubelet[3704]: I0508 00:28:48.264561 3704 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:28:48.285119 kubelet[3704]: I0508 00:28:48.285092 3704 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:28:48.285310 kubelet[3704]: I0508 00:28:48.285287 3704 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:28:48.285461 kubelet[3704]: I0508 00:28:48.285309 3704 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-e3459bc746","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:28:48.285542 kubelet[3704]: I0508 00:28:48.285531 3704 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:28:48.285542 kubelet[3704]: I0508 00:28:48.285540 3704 container_manager_linux.go:304] "Creating device plugin manager"
May 8 00:28:48.285763 kubelet[3704]: I0508 00:28:48.285750 3704 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:28:48.288589 kubelet[3704]: I0508 00:28:48.288572 3704 kubelet.go:446] "Attempting to sync node with API server"
May 8 00:28:48.288634 kubelet[3704]: I0508 00:28:48.288612 3704 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:28:48.288634 kubelet[3704]: I0508 00:28:48.288630 3704 kubelet.go:352] "Adding apiserver pod source"
May 8 00:28:48.288703 kubelet[3704]: I0508 00:28:48.288640 3704 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:28:48.289725 kubelet[3704]: W0508 00:28:48.289686 3704 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.129.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-e3459bc746&limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused
May 8 00:28:48.289754 kubelet[3704]: E0508 00:28:48.289742 3704 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.129.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-e3459bc746&limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError"
May 8 00:28:48.290225 kubelet[3704]: W0508 00:28:48.290192 3704 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.129.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused
May 8 00:28:48.290252 kubelet[3704]: E0508 00:28:48.290238 3704 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.129.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError"
May 8 00:28:48.291212 kubelet[3704]: I0508 00:28:48.291197 3704 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 8 00:28:48.291836 kubelet[3704]: I0508 00:28:48.291824 3704 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:28:48.291948 kubelet[3704]: W0508 00:28:48.291941 3704 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:28:48.292761 kubelet[3704]: I0508 00:28:48.292749 3704 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 8 00:28:48.292786 kubelet[3704]: I0508 00:28:48.292779 3704 server.go:1287] "Started kubelet"
May 8 00:28:48.292854 kubelet[3704]: I0508 00:28:48.292824 3704 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:28:48.292876 kubelet[3704]: I0508 00:28:48.292833 3704 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:28:48.293094 kubelet[3704]: I0508 00:28:48.293082 3704 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:28:48.293965 kubelet[3704]: I0508 00:28:48.293950 3704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:28:48.294003 kubelet[3704]: I0508 00:28:48.293988 3704 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 00:28:48.294079 kubelet[3704]: I0508 00:28:48.294055 3704 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 8 00:28:48.294128 kubelet[3704]: I0508 00:28:48.294114 3704 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:28:48.294177 kubelet[3704]: E0508 00:28:48.294150 3704 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-e3459bc746\" not found"
May 8 00:28:48.294205 kubelet[3704]: I0508 00:28:48.294174 3704 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:28:48.294346 kubelet[3704]: E0508 00:28:48.294312 3704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.129.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-e3459bc746?timeout=10s\": dial tcp 147.28.129.25:6443: connect: connection refused" interval="200ms"
May 8 00:28:48.294493 kubelet[3704]: W0508 00:28:48.294462 3704 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused
May 8 00:28:48.294514 kubelet[3704]: E0508 00:28:48.294503 3704 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError"
May 8 00:28:48.294556 kubelet[3704]: I0508 00:28:48.294537 3704 factory.go:221] Registration of the systemd container factory successfully
May 8 00:28:48.295790 kubelet[3704]: E0508 00:28:48.295755 3704 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:28:48.296562 kubelet[3704]: I0508 00:28:48.296531 3704 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:28:48.297388 kubelet[3704]: I0508 00:28:48.297373 3704 server.go:490] "Adding debug handlers to kubelet server"
May 8 00:28:48.297679 kubelet[3704]: E0508 00:28:48.297454 3704 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.129.25:6443/api/v1/namespaces/default/events\": dial tcp 147.28.129.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-e3459bc746.183d65c53cceacbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-e3459bc746,UID:ci-4230.1.1-n-e3459bc746,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-e3459bc746,},FirstTimestamp:2025-05-08 00:28:48.292760765 +0000 UTC m=+0.573847010,LastTimestamp:2025-05-08 00:28:48.292760765 +0000 UTC m=+0.573847010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-e3459bc746,}"
May 8 00:28:48.297867 kubelet[3704]: I0508 00:28:48.297850 3704 factory.go:221] Registration of the containerd container factory successfully
May 8 00:28:48.308610 kubelet[3704]: I0508 00:28:48.308578 3704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:28:48.309558 kubelet[3704]: I0508 00:28:48.309546 3704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:28:48.309577 kubelet[3704]: I0508 00:28:48.309564 3704 status_manager.go:227] "Starting to sync pod status with apiserver"
May 8 00:28:48.309602 kubelet[3704]: I0508 00:28:48.309582 3704 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 8 00:28:48.309602 kubelet[3704]: I0508 00:28:48.309590 3704 kubelet.go:2388] "Starting kubelet main sync loop"
May 8 00:28:48.309648 kubelet[3704]: E0508 00:28:48.309629 3704 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:28:48.310901 kubelet[3704]: W0508 00:28:48.310860 3704 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.129.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused
May 8 00:28:48.310925 kubelet[3704]: I0508 00:28:48.310904 3704 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 8 00:28:48.310925 kubelet[3704]: I0508 00:28:48.310919 3704 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 8 00:28:48.310925 kubelet[3704]: E0508 00:28:48.310915 3704 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.129.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError"
May 8 00:28:48.310983 kubelet[3704]: I0508 00:28:48.310937 3704 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:28:48.311842 kubelet[3704]: I0508 00:28:48.311826 3704 policy_none.go:49] "None policy: Start"
May 8 00:28:48.311866 kubelet[3704]: I0508 00:28:48.311844 3704 memory_manager.go:186] "Starting memorymanager" policy="None"
May 8 00:28:48.311866 kubelet[3704]: I0508 00:28:48.311854 3704 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:28:48.315298 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 8 00:28:48.338201 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 8 00:28:48.340645 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 8 00:28:48.355756 kubelet[3704]: I0508 00:28:48.355731 3704 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:28:48.355945 kubelet[3704]: I0508 00:28:48.355927 3704 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 00:28:48.355999 kubelet[3704]: I0508 00:28:48.355941 3704 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:28:48.356114 kubelet[3704]: I0508 00:28:48.356099 3704 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:28:48.356514 kubelet[3704]: E0508 00:28:48.356496 3704 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 8 00:28:48.356539 kubelet[3704]: E0508 00:28:48.356533 3704 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-e3459bc746\" not found"
May 8 00:28:48.418635 systemd[1]: Created slice kubepods-burstable-podcdf6eaf76afebb3de97fccc152f77faa.slice - libcontainer container kubepods-burstable-podcdf6eaf76afebb3de97fccc152f77faa.slice.
May 8 00:28:48.433992 kubelet[3704]: E0508 00:28:48.433966 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.434686 systemd[1]: Created slice kubepods-burstable-podd2bab5ffa98678703f803584f727053c.slice - libcontainer container kubepods-burstable-podd2bab5ffa98678703f803584f727053c.slice.
May 8 00:28:48.435927 kubelet[3704]: E0508 00:28:48.435908 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.437333 systemd[1]: Created slice kubepods-burstable-pod77a595618055339b6d4c97e32de6dbf0.slice - libcontainer container kubepods-burstable-pod77a595618055339b6d4c97e32de6dbf0.slice.
May 8 00:28:48.438474 kubelet[3704]: E0508 00:28:48.438459 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.457936 kubelet[3704]: I0508 00:28:48.457917 3704 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.458312 kubelet[3704]: E0508 00:28:48.458289 3704 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.129.25:6443/api/v1/nodes\": dial tcp 147.28.129.25:6443: connect: connection refused" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.494801 kubelet[3704]: E0508 00:28:48.494707 3704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.129.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-e3459bc746?timeout=10s\": dial tcp 147.28.129.25:6443: connect: connection refused" interval="400ms"
May 8 00:28:48.495850 kubelet[3704]: I0508 00:28:48.495816 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.495879 kubelet[3704]: I0508 00:28:48.495859 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.495900 kubelet[3704]: I0508 00:28:48.495879 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdf6eaf76afebb3de97fccc152f77faa-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" (UID: \"cdf6eaf76afebb3de97fccc152f77faa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.495921 kubelet[3704]: I0508 00:28:48.495898 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdf6eaf76afebb3de97fccc152f77faa-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" (UID: \"cdf6eaf76afebb3de97fccc152f77faa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.495921 kubelet[3704]: I0508 00:28:48.495916 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdf6eaf76afebb3de97fccc152f77faa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" (UID: \"cdf6eaf76afebb3de97fccc152f77faa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.495962 kubelet[3704]: I0508 00:28:48.495932 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.495962 kubelet[3704]: I0508 00:28:48.495948 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.496000 kubelet[3704]: I0508 00:28:48.495976 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.496000 kubelet[3704]: I0508 00:28:48.495994 3704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77a595618055339b6d4c97e32de6dbf0-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-e3459bc746\" (UID: \"77a595618055339b6d4c97e32de6dbf0\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.664507 kubelet[3704]: I0508 00:28:48.664484 3704 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.664800 kubelet[3704]: E0508 00:28:48.664776 3704 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.129.25:6443/api/v1/nodes\": dial tcp 147.28.129.25:6443: connect: connection refused" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:48.735812 containerd[2726]: time="2025-05-08T00:28:48.735782237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-e3459bc746,Uid:cdf6eaf76afebb3de97fccc152f77faa,Namespace:kube-system,Attempt:0,}"
May 8 00:28:48.737124 containerd[2726]: time="2025-05-08T00:28:48.737098886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-e3459bc746,Uid:d2bab5ffa98678703f803584f727053c,Namespace:kube-system,Attempt:0,}"
May 8 00:28:48.739600 containerd[2726]: time="2025-05-08T00:28:48.739579494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-e3459bc746,Uid:77a595618055339b6d4c97e32de6dbf0,Namespace:kube-system,Attempt:0,}"
May 8 00:28:48.895912 kubelet[3704]: E0508 00:28:48.895872 3704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.129.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-e3459bc746?timeout=10s\": dial tcp 147.28.129.25:6443: connect: connection refused" interval="800ms"
May 8 00:28:49.022332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2280623233.mount: Deactivated successfully.
May 8 00:28:49.023091 containerd[2726]: time="2025-05-08T00:28:49.023065817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:28:49.023596 containerd[2726]: time="2025-05-08T00:28:49.023562314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:28:49.023730 containerd[2726]: time="2025-05-08T00:28:49.023698705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
May 8 00:28:49.023859 containerd[2726]: time="2025-05-08T00:28:49.023841838Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:28:49.026100 containerd[2726]: time="2025-05-08T00:28:49.026061195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:28:49.026147 containerd[2726]: time="2025-05-08T00:28:49.026110741Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:28:49.029664 containerd[2726]: time="2025-05-08T00:28:49.029638798Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:28:49.031066 containerd[2726]: time="2025-05-08T00:28:49.031044755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 293.888245ms"
May 8 00:28:49.031759 containerd[2726]: time="2025-05-08T00:28:49.031737321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 295.886613ms"
May 8 00:28:49.032141 containerd[2726]: time="2025-05-08T00:28:49.032117174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:28:49.034452 containerd[2726]: time="2025-05-08T00:28:49.034426049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 294.793913ms"
May 8 00:28:49.067065 kubelet[3704]: I0508 00:28:49.067037 3704 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:49.067363 kubelet[3704]: E0508 00:28:49.067335 3704 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.129.25:6443/api/v1/nodes\": dial tcp 147.28.129.25:6443: connect: connection refused" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:49.141854 containerd[2726]: time="2025-05-08T00:28:49.141778710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:28:49.141854 containerd[2726]: time="2025-05-08T00:28:49.141845649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:28:49.141903 containerd[2726]: time="2025-05-08T00:28:49.141856779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:28:49.141950 containerd[2726]: time="2025-05-08T00:28:49.141931098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:28:49.143062 containerd[2726]: time="2025-05-08T00:28:49.142976471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:28:49.143105 containerd[2726]: time="2025-05-08T00:28:49.143076839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:28:49.143128 containerd[2726]: time="2025-05-08T00:28:49.143098820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:28:49.143415 containerd[2726]: time="2025-05-08T00:28:49.143355605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:28:49.143436 containerd[2726]: time="2025-05-08T00:28:49.143415683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:28:49.143436 containerd[2726]: time="2025-05-08T00:28:49.143428249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:28:49.143524 containerd[2726]: time="2025-05-08T00:28:49.143505280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:28:49.143574 containerd[2726]: time="2025-05-08T00:28:49.143551316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:28:49.168178 systemd[1]: Started cri-containerd-6924dc096b54f97cb95e460fee48e756848afbf3187d523e19c81ecd51a8bb01.scope - libcontainer container 6924dc096b54f97cb95e460fee48e756848afbf3187d523e19c81ecd51a8bb01.
May 8 00:28:49.169447 systemd[1]: Started cri-containerd-875bccbe13d281c45af3c35984927ae9e41f6651adbeb4f67643fa802a762757.scope - libcontainer container 875bccbe13d281c45af3c35984927ae9e41f6651adbeb4f67643fa802a762757.
May 8 00:28:49.170691 systemd[1]: Started cri-containerd-db82cc8afc9dcc0d08524d84c4ef02b613818a93349c5e1f0e8001f91924732c.scope - libcontainer container db82cc8afc9dcc0d08524d84c4ef02b613818a93349c5e1f0e8001f91924732c.
May 8 00:28:49.191368 containerd[2726]: time="2025-05-08T00:28:49.191339055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-e3459bc746,Uid:77a595618055339b6d4c97e32de6dbf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6924dc096b54f97cb95e460fee48e756848afbf3187d523e19c81ecd51a8bb01\""
May 8 00:28:49.192568 containerd[2726]: time="2025-05-08T00:28:49.192532307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-e3459bc746,Uid:d2bab5ffa98678703f803584f727053c,Namespace:kube-system,Attempt:0,} returns sandbox id \"875bccbe13d281c45af3c35984927ae9e41f6651adbeb4f67643fa802a762757\""
May 8 00:28:49.193734 containerd[2726]: time="2025-05-08T00:28:49.193716265Z" level=info msg="CreateContainer within sandbox \"6924dc096b54f97cb95e460fee48e756848afbf3187d523e19c81ecd51a8bb01\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 8 00:28:49.193926 containerd[2726]: time="2025-05-08T00:28:49.193900048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-e3459bc746,Uid:cdf6eaf76afebb3de97fccc152f77faa,Namespace:kube-system,Attempt:0,} returns sandbox id \"db82cc8afc9dcc0d08524d84c4ef02b613818a93349c5e1f0e8001f91924732c\""
May 8 00:28:49.194007 containerd[2726]: time="2025-05-08T00:28:49.193987172Z" level=info msg="CreateContainer within sandbox \"875bccbe13d281c45af3c35984927ae9e41f6651adbeb4f67643fa802a762757\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 8 00:28:49.195485 containerd[2726]: time="2025-05-08T00:28:49.195462302Z" level=info msg="CreateContainer within sandbox \"db82cc8afc9dcc0d08524d84c4ef02b613818a93349c5e1f0e8001f91924732c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 8 00:28:49.203216 containerd[2726]: time="2025-05-08T00:28:49.203192113Z" level=info msg="CreateContainer within sandbox \"6924dc096b54f97cb95e460fee48e756848afbf3187d523e19c81ecd51a8bb01\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72b3d452a95998c95ef42836624301896a338d8c88756844fd64fb2bb46af58e\""
May 8 00:28:49.203823 containerd[2726]: time="2025-05-08T00:28:49.203800229Z" level=info msg="StartContainer for \"72b3d452a95998c95ef42836624301896a338d8c88756844fd64fb2bb46af58e\""
May 8 00:28:49.204114 containerd[2726]: time="2025-05-08T00:28:49.204070577Z" level=info msg="CreateContainer within sandbox \"db82cc8afc9dcc0d08524d84c4ef02b613818a93349c5e1f0e8001f91924732c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f1b5483f9221890cd3edbc2587753ddfa060d9525b656e03ed0233954029169\""
May 8 00:28:49.204198 containerd[2726]: time="2025-05-08T00:28:49.204170347Z" level=info msg="CreateContainer within sandbox \"875bccbe13d281c45af3c35984927ae9e41f6651adbeb4f67643fa802a762757\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cfb884b3216b707d3a5b07d12b2a5c97731d2c45d16de956ccaaa5d2a2cd5c65\""
May 8 00:28:49.204607 containerd[2726]: time="2025-05-08T00:28:49.204582592Z" level=info msg="StartContainer for \"cfb884b3216b707d3a5b07d12b2a5c97731d2c45d16de956ccaaa5d2a2cd5c65\""
May 8 00:28:49.204758 containerd[2726]: time="2025-05-08T00:28:49.204740485Z" level=info msg="StartContainer for \"5f1b5483f9221890cd3edbc2587753ddfa060d9525b656e03ed0233954029169\""
May 8 00:28:49.246179 systemd[1]: Started cri-containerd-5f1b5483f9221890cd3edbc2587753ddfa060d9525b656e03ed0233954029169.scope - libcontainer container 5f1b5483f9221890cd3edbc2587753ddfa060d9525b656e03ed0233954029169.
May 8 00:28:49.247344 systemd[1]: Started cri-containerd-72b3d452a95998c95ef42836624301896a338d8c88756844fd64fb2bb46af58e.scope - libcontainer container 72b3d452a95998c95ef42836624301896a338d8c88756844fd64fb2bb46af58e.
May 8 00:28:49.248447 systemd[1]: Started cri-containerd-cfb884b3216b707d3a5b07d12b2a5c97731d2c45d16de956ccaaa5d2a2cd5c65.scope - libcontainer container cfb884b3216b707d3a5b07d12b2a5c97731d2c45d16de956ccaaa5d2a2cd5c65.
May 8 00:28:49.256300 kubelet[3704]: W0508 00:28:49.256246 3704 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused
May 8 00:28:49.256367 kubelet[3704]: E0508 00:28:49.256305 3704 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError"
May 8 00:28:49.271353 containerd[2726]: time="2025-05-08T00:28:49.271319436Z" level=info msg="StartContainer for \"5f1b5483f9221890cd3edbc2587753ddfa060d9525b656e03ed0233954029169\" returns successfully"
May 8 00:28:49.271961 containerd[2726]: time="2025-05-08T00:28:49.271933654Z" level=info msg="StartContainer for \"72b3d452a95998c95ef42836624301896a338d8c88756844fd64fb2bb46af58e\" returns successfully"
May 8 00:28:49.273944 containerd[2726]: time="2025-05-08T00:28:49.273923353Z" level=info msg="StartContainer for \"cfb884b3216b707d3a5b07d12b2a5c97731d2c45d16de956ccaaa5d2a2cd5c65\" returns successfully"
May 8 00:28:49.314107 kubelet[3704]: E0508 00:28:49.314037 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:49.314697 kubelet[3704]: E0508 00:28:49.314677 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:49.315824 kubelet[3704]: E0508 00:28:49.315807 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:49.869704 kubelet[3704]: I0508 00:28:49.869679 3704 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:50.317737 kubelet[3704]: E0508 00:28:50.317711 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:50.318074 kubelet[3704]: E0508 00:28:50.317811 3704 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.098031 kubelet[3704]: E0508 00:28:51.097862 3704 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-e3459bc746\" not found" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.199994 kubelet[3704]: I0508 00:28:51.199960 3704 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.290531 kubelet[3704]: I0508 00:28:51.290495 3704 apiserver.go:52] "Watching apiserver"
May 8 00:28:51.294479 kubelet[3704]: I0508 00:28:51.294450 3704 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:28:51.294479 kubelet[3704]: I0508 00:28:51.294469 3704 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.299724 kubelet[3704]: E0508 00:28:51.299705 3704 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.299724 kubelet[3704]: I0508 00:28:51.299723 3704 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.301172 kubelet[3704]: E0508 00:28:51.301148 3704 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.301207 kubelet[3704]: I0508 00:28:51.301175 3704 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.302456 kubelet[3704]: E0508 00:28:51.302441 3704 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-n-e3459bc746\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.828458 kubelet[3704]: I0508 00:28:51.828441 3704 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:51.830037 kubelet[3704]: E0508 00:28:51.830010 3704 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746"
May 8 00:28:53.161777 systemd[1]: Reload requested from client PID 4139 ('systemctl') (unit session-9.scope)...
May 8 00:28:53.161788 systemd[1]: Reloading...
May 8 00:28:53.235063 zram_generator::config[4192]: No configuration found.
May 8 00:28:53.323935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:28:53.426157 systemd[1]: Reloading finished in 264 ms. May 8 00:28:53.449656 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:28:53.469467 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:28:53.469703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:28:53.469743 systemd[1]: kubelet.service: Consumed 977ms CPU time, 147.6M memory peak. May 8 00:28:53.479542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:28:53.586111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:28:53.589454 (kubelet)[4251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:28:53.620040 kubelet[4251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:28:53.620040 kubelet[4251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:28:53.620040 kubelet[4251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:28:53.620371 kubelet[4251]: I0508 00:28:53.620107 4251 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:28:53.625222 kubelet[4251]: I0508 00:28:53.625198 4251 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:28:53.625222 kubelet[4251]: I0508 00:28:53.625219 4251 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:28:53.625465 kubelet[4251]: I0508 00:28:53.625456 4251 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:28:53.626619 kubelet[4251]: I0508 00:28:53.626607 4251 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:28:53.628868 kubelet[4251]: I0508 00:28:53.628845 4251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:28:53.632331 kubelet[4251]: E0508 00:28:53.632303 4251 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:28:53.632374 kubelet[4251]: I0508 00:28:53.632334 4251 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:28:53.650475 kubelet[4251]: I0508 00:28:53.650454 4251 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:28:53.650660 kubelet[4251]: I0508 00:28:53.650628 4251 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:28:53.650807 kubelet[4251]: I0508 00:28:53.650655 4251 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-e3459bc746","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:28:53.650872 kubelet[4251]: I0508 00:28:53.650816 4251 topology_manager.go:138] "Creating topology manager with 
none policy" May 8 00:28:53.650872 kubelet[4251]: I0508 00:28:53.650825 4251 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:28:53.650912 kubelet[4251]: I0508 00:28:53.650885 4251 state_mem.go:36] "Initialized new in-memory state store" May 8 00:28:53.651209 kubelet[4251]: I0508 00:28:53.651196 4251 kubelet.go:446] "Attempting to sync node with API server" May 8 00:28:53.651241 kubelet[4251]: I0508 00:28:53.651211 4251 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:28:53.651241 kubelet[4251]: I0508 00:28:53.651228 4251 kubelet.go:352] "Adding apiserver pod source" May 8 00:28:53.651241 kubelet[4251]: I0508 00:28:53.651237 4251 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:28:53.651810 kubelet[4251]: I0508 00:28:53.651790 4251 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:28:53.653402 kubelet[4251]: I0508 00:28:53.653376 4251 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:28:53.653826 kubelet[4251]: I0508 00:28:53.653815 4251 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:28:53.653848 kubelet[4251]: I0508 00:28:53.653840 4251 server.go:1287] "Started kubelet" May 8 00:28:53.653980 kubelet[4251]: I0508 00:28:53.653952 4251 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:28:53.654063 kubelet[4251]: I0508 00:28:53.654011 4251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:28:53.654272 kubelet[4251]: I0508 00:28:53.654257 4251 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:28:53.655003 kubelet[4251]: I0508 00:28:53.654986 4251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:28:53.655003 kubelet[4251]: I0508 00:28:53.654991 4251 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:28:53.655114 kubelet[4251]: E0508 00:28:53.655099 4251 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-e3459bc746\" not found" May 8 00:28:53.655114 kubelet[4251]: I0508 00:28:53.655106 4251 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:28:53.655197 kubelet[4251]: I0508 00:28:53.655133 4251 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:28:53.655253 kubelet[4251]: I0508 00:28:53.655238 4251 reconciler.go:26] "Reconciler: start to sync state" May 8 00:28:53.655476 kubelet[4251]: I0508 00:28:53.655458 4251 factory.go:221] Registration of the systemd container factory successfully May 8 00:28:53.655500 kubelet[4251]: E0508 00:28:53.655459 4251 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:28:53.655571 kubelet[4251]: I0508 00:28:53.655554 4251 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:28:53.655958 kubelet[4251]: I0508 00:28:53.655942 4251 server.go:490] "Adding debug handlers to kubelet server" May 8 00:28:53.656260 kubelet[4251]: I0508 00:28:53.656246 4251 factory.go:221] Registration of the containerd container factory successfully May 8 00:28:53.662779 kubelet[4251]: I0508 00:28:53.662658 4251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:28:53.663806 kubelet[4251]: I0508 00:28:53.663754 4251 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:28:53.663806 kubelet[4251]: I0508 00:28:53.663780 4251 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:28:53.663806 kubelet[4251]: I0508 00:28:53.663797 4251 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 8 00:28:53.663806 kubelet[4251]: I0508 00:28:53.663805 4251 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:28:53.663951 kubelet[4251]: E0508 00:28:53.663846 4251 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:28:53.686813 kubelet[4251]: I0508 00:28:53.686740 4251 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:28:53.686813 kubelet[4251]: I0508 00:28:53.686756 4251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:28:53.686813 kubelet[4251]: I0508 00:28:53.686772 4251 state_mem.go:36] "Initialized new in-memory state store" May 8 00:28:53.686929 kubelet[4251]: I0508 00:28:53.686912 4251 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:28:53.686951 kubelet[4251]: I0508 00:28:53.686924 4251 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:28:53.686951 kubelet[4251]: I0508 00:28:53.686943 4251 policy_none.go:49] "None policy: Start" May 8 00:28:53.686994 kubelet[4251]: I0508 00:28:53.686951 4251 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:28:53.686994 kubelet[4251]: I0508 00:28:53.686960 4251 state_mem.go:35] "Initializing new in-memory state store" May 8 00:28:53.687071 kubelet[4251]: I0508 00:28:53.687047 4251 state_mem.go:75] "Updated machine memory state" May 8 00:28:53.689956 kubelet[4251]: I0508 00:28:53.689943 4251 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:28:53.690107 kubelet[4251]: I0508 00:28:53.690096 
4251 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:28:53.690151 kubelet[4251]: I0508 00:28:53.690107 4251 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:28:53.690464 kubelet[4251]: I0508 00:28:53.690265 4251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:28:53.690706 kubelet[4251]: E0508 00:28:53.690691 4251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:28:53.765208 kubelet[4251]: I0508 00:28:53.765183 4251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.765343 kubelet[4251]: I0508 00:28:53.765328 4251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.765402 kubelet[4251]: I0508 00:28:53.765384 4251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.767917 kubelet[4251]: W0508 00:28:53.767897 4251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:28:53.768212 kubelet[4251]: W0508 00:28:53.768199 4251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:28:53.768363 kubelet[4251]: W0508 00:28:53.768345 4251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:28:53.792877 kubelet[4251]: I0508 00:28:53.792859 4251 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-e3459bc746" May 8 00:28:53.797073 kubelet[4251]: I0508 
00:28:53.797025 4251 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.1.1-n-e3459bc746" May 8 00:28:53.797160 kubelet[4251]: I0508 00:28:53.797104 4251 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-n-e3459bc746" May 8 00:28:53.956780 kubelet[4251]: I0508 00:28:53.956685 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.956780 kubelet[4251]: I0508 00:28:53.956730 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.956780 kubelet[4251]: I0508 00:28:53.956766 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77a595618055339b6d4c97e32de6dbf0-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-e3459bc746\" (UID: \"77a595618055339b6d4c97e32de6dbf0\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.957159 kubelet[4251]: I0508 00:28:53.956795 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdf6eaf76afebb3de97fccc152f77faa-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" (UID: \"cdf6eaf76afebb3de97fccc152f77faa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.957159 kubelet[4251]: I0508 
00:28:53.956817 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdf6eaf76afebb3de97fccc152f77faa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" (UID: \"cdf6eaf76afebb3de97fccc152f77faa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.957159 kubelet[4251]: I0508 00:28:53.956834 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.957159 kubelet[4251]: I0508 00:28:53.956851 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.957159 kubelet[4251]: I0508 00:28:53.956869 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdf6eaf76afebb3de97fccc152f77faa-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" (UID: \"cdf6eaf76afebb3de97fccc152f77faa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746" May 8 00:28:53.957313 kubelet[4251]: I0508 00:28:53.956886 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2bab5ffa98678703f803584f727053c-ca-certs\") pod 
\"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" (UID: \"d2bab5ffa98678703f803584f727053c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:54.652206 kubelet[4251]: I0508 00:28:54.652179 4251 apiserver.go:52] "Watching apiserver" May 8 00:28:54.655389 kubelet[4251]: I0508 00:28:54.655370 4251 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:28:54.669548 kubelet[4251]: I0508 00:28:54.669530 4251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:54.669648 kubelet[4251]: I0508 00:28:54.669553 4251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746" May 8 00:28:54.673079 kubelet[4251]: W0508 00:28:54.673056 4251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:28:54.673145 kubelet[4251]: E0508 00:28:54.673106 4251 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-n-e3459bc746\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" May 8 00:28:54.673293 kubelet[4251]: W0508 00:28:54.673277 4251 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:28:54.673326 kubelet[4251]: E0508 00:28:54.673317 4251 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-n-e3459bc746\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746" May 8 00:28:54.684232 kubelet[4251]: I0508 00:28:54.684188 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3459bc746" podStartSLOduration=1.684176626 
podStartE2EDuration="1.684176626s" podCreationTimestamp="2025-05-08 00:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:28:54.684131209 +0000 UTC m=+1.091899801" watchObservedRunningTime="2025-05-08 00:28:54.684176626 +0000 UTC m=+1.091945218" May 8 00:28:54.694818 kubelet[4251]: I0508 00:28:54.694781 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3459bc746" podStartSLOduration=1.6947679519999999 podStartE2EDuration="1.694767952s" podCreationTimestamp="2025-05-08 00:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:28:54.689452996 +0000 UTC m=+1.097221548" watchObservedRunningTime="2025-05-08 00:28:54.694767952 +0000 UTC m=+1.102536504" May 8 00:28:54.700161 kubelet[4251]: I0508 00:28:54.700129 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3459bc746" podStartSLOduration=1.70011766 podStartE2EDuration="1.70011766s" podCreationTimestamp="2025-05-08 00:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:28:54.694828907 +0000 UTC m=+1.102597499" watchObservedRunningTime="2025-05-08 00:28:54.70011766 +0000 UTC m=+1.107886251" May 8 00:28:58.017496 sudo[2986]: pam_unix(sudo:session): session closed for user root May 8 00:28:58.079087 sshd[2985]: Connection closed by 139.178.68.195 port 42090 May 8 00:28:58.079459 sshd-session[2983]: pam_unix(sshd:session): session closed for user core May 8 00:28:58.082381 systemd[1]: sshd@7-147.28.129.25:22-139.178.68.195:42090.service: Deactivated successfully. May 8 00:28:58.084261 systemd[1]: session-9.scope: Deactivated successfully. 
May 8 00:28:58.084474 systemd[1]: session-9.scope: Consumed 7.055s CPU time, 248.5M memory peak. May 8 00:28:58.085525 systemd-logind[2709]: Session 9 logged out. Waiting for processes to exit. May 8 00:28:58.086092 systemd-logind[2709]: Removed session 9. May 8 00:28:59.129027 kubelet[4251]: I0508 00:28:59.128994 4251 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:28:59.129483 kubelet[4251]: I0508 00:28:59.129448 4251 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:28:59.129511 containerd[2726]: time="2025-05-08T00:28:59.129291457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:29:00.105780 systemd[1]: Created slice kubepods-besteffort-pod9a7b0788_54fb_4350_84ef_e7269806a7e6.slice - libcontainer container kubepods-besteffort-pod9a7b0788_54fb_4350_84ef_e7269806a7e6.slice. May 8 00:29:00.196151 kubelet[4251]: I0508 00:29:00.196120 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a7b0788-54fb-4350-84ef-e7269806a7e6-xtables-lock\") pod \"kube-proxy-f56jc\" (UID: \"9a7b0788-54fb-4350-84ef-e7269806a7e6\") " pod="kube-system/kube-proxy-f56jc" May 8 00:29:00.196151 kubelet[4251]: I0508 00:29:00.196153 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdg9r\" (UniqueName: \"kubernetes.io/projected/9a7b0788-54fb-4350-84ef-e7269806a7e6-kube-api-access-rdg9r\") pod \"kube-proxy-f56jc\" (UID: \"9a7b0788-54fb-4350-84ef-e7269806a7e6\") " pod="kube-system/kube-proxy-f56jc" May 8 00:29:00.196478 kubelet[4251]: I0508 00:29:00.196173 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9a7b0788-54fb-4350-84ef-e7269806a7e6-lib-modules\") pod \"kube-proxy-f56jc\" (UID: \"9a7b0788-54fb-4350-84ef-e7269806a7e6\") " pod="kube-system/kube-proxy-f56jc" May 8 00:29:00.196478 kubelet[4251]: I0508 00:29:00.196191 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a7b0788-54fb-4350-84ef-e7269806a7e6-kube-proxy\") pod \"kube-proxy-f56jc\" (UID: \"9a7b0788-54fb-4350-84ef-e7269806a7e6\") " pod="kube-system/kube-proxy-f56jc" May 8 00:29:00.226951 systemd[1]: Created slice kubepods-besteffort-poda30a69f6_02d5_4a45_bfa6_a2657af877a2.slice - libcontainer container kubepods-besteffort-poda30a69f6_02d5_4a45_bfa6_a2657af877a2.slice. May 8 00:29:00.296545 kubelet[4251]: I0508 00:29:00.296509 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4zcl\" (UniqueName: \"kubernetes.io/projected/a30a69f6-02d5-4a45-bfa6-a2657af877a2-kube-api-access-x4zcl\") pod \"tigera-operator-789496d6f5-wcf9q\" (UID: \"a30a69f6-02d5-4a45-bfa6-a2657af877a2\") " pod="tigera-operator/tigera-operator-789496d6f5-wcf9q" May 8 00:29:00.296616 kubelet[4251]: I0508 00:29:00.296558 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a30a69f6-02d5-4a45-bfa6-a2657af877a2-var-lib-calico\") pod \"tigera-operator-789496d6f5-wcf9q\" (UID: \"a30a69f6-02d5-4a45-bfa6-a2657af877a2\") " pod="tigera-operator/tigera-operator-789496d6f5-wcf9q" May 8 00:29:00.421079 containerd[2726]: time="2025-05-08T00:29:00.420971716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f56jc,Uid:9a7b0788-54fb-4350-84ef-e7269806a7e6,Namespace:kube-system,Attempt:0,}" May 8 00:29:00.433054 containerd[2726]: time="2025-05-08T00:29:00.432997564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:00.433096 containerd[2726]: time="2025-05-08T00:29:00.433055887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:00.433096 containerd[2726]: time="2025-05-08T00:29:00.433068280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:00.433152 containerd[2726]: time="2025-05-08T00:29:00.433136477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:00.453181 systemd[1]: Started cri-containerd-83fac2852a56959ec6892a163663c051506a7434dae89a204fe19640a4d1d833.scope - libcontainer container 83fac2852a56959ec6892a163663c051506a7434dae89a204fe19640a4d1d833. May 8 00:29:00.468453 containerd[2726]: time="2025-05-08T00:29:00.468423752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f56jc,Uid:9a7b0788-54fb-4350-84ef-e7269806a7e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"83fac2852a56959ec6892a163663c051506a7434dae89a204fe19640a4d1d833\"" May 8 00:29:00.470431 containerd[2726]: time="2025-05-08T00:29:00.470409797Z" level=info msg="CreateContainer within sandbox \"83fac2852a56959ec6892a163663c051506a7434dae89a204fe19640a4d1d833\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:29:00.480704 containerd[2726]: time="2025-05-08T00:29:00.480669263Z" level=info msg="CreateContainer within sandbox \"83fac2852a56959ec6892a163663c051506a7434dae89a204fe19640a4d1d833\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7fe933a46fad47c2213e8b64c2f0bb8b75269068b62247467b94c70b3b8ca59e\"" May 8 00:29:00.481233 containerd[2726]: time="2025-05-08T00:29:00.481213205Z" level=info msg="StartContainer for \"7fe933a46fad47c2213e8b64c2f0bb8b75269068b62247467b94c70b3b8ca59e\"" May 8 
00:29:00.508232 systemd[1]: Started cri-containerd-7fe933a46fad47c2213e8b64c2f0bb8b75269068b62247467b94c70b3b8ca59e.scope - libcontainer container 7fe933a46fad47c2213e8b64c2f0bb8b75269068b62247467b94c70b3b8ca59e. May 8 00:29:00.528558 containerd[2726]: time="2025-05-08T00:29:00.528525327Z" level=info msg="StartContainer for \"7fe933a46fad47c2213e8b64c2f0bb8b75269068b62247467b94c70b3b8ca59e\" returns successfully" May 8 00:29:00.529065 containerd[2726]: time="2025-05-08T00:29:00.529037009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-wcf9q,Uid:a30a69f6-02d5-4a45-bfa6-a2657af877a2,Namespace:tigera-operator,Attempt:0,}" May 8 00:29:00.541543 containerd[2726]: time="2025-05-08T00:29:00.541009171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:00.541543 containerd[2726]: time="2025-05-08T00:29:00.541401687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:00.541543 containerd[2726]: time="2025-05-08T00:29:00.541419276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:00.541543 containerd[2726]: time="2025-05-08T00:29:00.541505622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:00.566240 systemd[1]: Started cri-containerd-9bf34e94db09878b70e7095d5d81dff1418d66778157f932bb54c2a683a185f4.scope - libcontainer container 9bf34e94db09878b70e7095d5d81dff1418d66778157f932bb54c2a683a185f4. 
May 8 00:29:00.589302 containerd[2726]: time="2025-05-08T00:29:00.589268784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-wcf9q,Uid:a30a69f6-02d5-4a45-bfa6-a2657af877a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9bf34e94db09878b70e7095d5d81dff1418d66778157f932bb54c2a683a185f4\""
May 8 00:29:00.590440 containerd[2726]: time="2025-05-08T00:29:00.590420109Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 8 00:29:00.685768 kubelet[4251]: I0508 00:29:00.685684 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f56jc" podStartSLOduration=0.685669646 podStartE2EDuration="685.669646ms" podCreationTimestamp="2025-05-08 00:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:29:00.685595612 +0000 UTC m=+7.093364204" watchObservedRunningTime="2025-05-08 00:29:00.685669646 +0000 UTC m=+7.093438238"
May 8 00:29:01.542260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930196924.mount: Deactivated successfully.
May 8 00:29:03.133083 update_engine[2720]: I20250508 00:29:03.132844 2720 update_attempter.cc:509] Updating boot flags...
May 8 00:29:03.162061 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (4795)
May 8 00:29:03.192063 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (4799)
May 8 00:29:04.963149 containerd[2726]: time="2025-05-08T00:29:04.963057766Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 8 00:29:04.963149 containerd[2726]: time="2025-05-08T00:29:04.963068727Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:04.964059 containerd[2726]: time="2025-05-08T00:29:04.964030385Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:04.966059 containerd[2726]: time="2025-05-08T00:29:04.966033987Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:04.966888 containerd[2726]: time="2025-05-08T00:29:04.966863831Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 4.376415533s"
May 8 00:29:04.966915 containerd[2726]: time="2025-05-08T00:29:04.966894914Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 8 00:29:04.968465 containerd[2726]: time="2025-05-08T00:29:04.968443991Z" level=info msg="CreateContainer within sandbox \"9bf34e94db09878b70e7095d5d81dff1418d66778157f932bb54c2a683a185f4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 8 00:29:04.973271 containerd[2726]: time="2025-05-08T00:29:04.973246677Z" level=info msg="CreateContainer within sandbox \"9bf34e94db09878b70e7095d5d81dff1418d66778157f932bb54c2a683a185f4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c482fc84818ca9912d428ade4454e47b089ab6a11f6746ab21ad12249e8121ad\""
May 8 00:29:04.973634 containerd[2726]: time="2025-05-08T00:29:04.973609513Z" level=info msg="StartContainer for \"c482fc84818ca9912d428ade4454e47b089ab6a11f6746ab21ad12249e8121ad\""
May 8 00:29:05.003255 systemd[1]: Started cri-containerd-c482fc84818ca9912d428ade4454e47b089ab6a11f6746ab21ad12249e8121ad.scope - libcontainer container c482fc84818ca9912d428ade4454e47b089ab6a11f6746ab21ad12249e8121ad.
May 8 00:29:05.019813 containerd[2726]: time="2025-05-08T00:29:05.019786079Z" level=info msg="StartContainer for \"c482fc84818ca9912d428ade4454e47b089ab6a11f6746ab21ad12249e8121ad\" returns successfully"
May 8 00:29:05.692716 kubelet[4251]: I0508 00:29:05.692668 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-wcf9q" podStartSLOduration=1.315264555 podStartE2EDuration="5.692651612s" podCreationTimestamp="2025-05-08 00:29:00 +0000 UTC" firstStartedPulling="2025-05-08 00:29:00.590087676 +0000 UTC m=+6.997856268" lastFinishedPulling="2025-05-08 00:29:04.967474733 +0000 UTC m=+11.375243325" observedRunningTime="2025-05-08 00:29:05.692613088 +0000 UTC m=+12.100381680" watchObservedRunningTime="2025-05-08 00:29:05.692651612 +0000 UTC m=+12.100420204"
May 8 00:29:09.020368 systemd[1]: Created slice kubepods-besteffort-pode3ffa4da_f787_4841_bb42_177e5a214190.slice - libcontainer container kubepods-besteffort-pode3ffa4da_f787_4841_bb42_177e5a214190.slice.
May 8 00:29:09.039186 systemd[1]: Created slice kubepods-besteffort-podbdd4679d_f51b_4b1a_955a_d1b148438dee.slice - libcontainer container kubepods-besteffort-podbdd4679d_f51b_4b1a_955a_d1b148438dee.slice.
May 8 00:29:09.044478 kubelet[4251]: I0508 00:29:09.044447 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfjmg\" (UniqueName: \"kubernetes.io/projected/e3ffa4da-f787-4841-bb42-177e5a214190-kube-api-access-hfjmg\") pod \"calico-typha-858bcc7475-v6pfz\" (UID: \"e3ffa4da-f787-4841-bb42-177e5a214190\") " pod="calico-system/calico-typha-858bcc7475-v6pfz"
May 8 00:29:09.044742 kubelet[4251]: I0508 00:29:09.044483 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-policysync\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044742 kubelet[4251]: I0508 00:29:09.044500 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-cni-bin-dir\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044742 kubelet[4251]: I0508 00:29:09.044515 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-cni-net-dir\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044742 kubelet[4251]: I0508 00:29:09.044533 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-xtables-lock\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044742 kubelet[4251]: I0508 00:29:09.044548 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-var-lib-calico\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044851 kubelet[4251]: I0508 00:29:09.044564 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-lib-modules\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044851 kubelet[4251]: I0508 00:29:09.044579 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl747\" (UniqueName: \"kubernetes.io/projected/bdd4679d-f51b-4b1a-955a-d1b148438dee-kube-api-access-gl747\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044851 kubelet[4251]: I0508 00:29:09.044610 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e3ffa4da-f787-4841-bb42-177e5a214190-typha-certs\") pod \"calico-typha-858bcc7475-v6pfz\" (UID: \"e3ffa4da-f787-4841-bb42-177e5a214190\") " pod="calico-system/calico-typha-858bcc7475-v6pfz"
May 8 00:29:09.044851 kubelet[4251]: I0508 00:29:09.044628 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bdd4679d-f51b-4b1a-955a-d1b148438dee-node-certs\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.044851 kubelet[4251]: I0508 00:29:09.044644 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd4679d-f51b-4b1a-955a-d1b148438dee-tigera-ca-bundle\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.045001 kubelet[4251]: I0508 00:29:09.044659 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-cni-log-dir\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.045001 kubelet[4251]: I0508 00:29:09.044687 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ffa4da-f787-4841-bb42-177e5a214190-tigera-ca-bundle\") pod \"calico-typha-858bcc7475-v6pfz\" (UID: \"e3ffa4da-f787-4841-bb42-177e5a214190\") " pod="calico-system/calico-typha-858bcc7475-v6pfz"
May 8 00:29:09.045001 kubelet[4251]: I0508 00:29:09.044702 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-var-run-calico\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.045001 kubelet[4251]: I0508 00:29:09.044716 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bdd4679d-f51b-4b1a-955a-d1b148438dee-flexvol-driver-host\") pod \"calico-node-ncdtd\" (UID: \"bdd4679d-f51b-4b1a-955a-d1b148438dee\") " pod="calico-system/calico-node-ncdtd"
May 8 00:29:09.142260 kubelet[4251]: E0508 00:29:09.142224 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qllhv" podUID="e48e3e91-8128-422e-a541-9325080112a1"
May 8 00:29:09.149003 kubelet[4251]: E0508 00:29:09.148972 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.149031 kubelet[4251]: W0508 00:29:09.149002 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.149031 kubelet[4251]: E0508 00:29:09.149022 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.149422 kubelet[4251]: E0508 00:29:09.149404 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.149449 kubelet[4251]: W0508 00:29:09.149420 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.149449 kubelet[4251]: E0508 00:29:09.149436 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.149690 kubelet[4251]: E0508 00:29:09.149674 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.149717 kubelet[4251]: W0508 00:29:09.149688 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.149717 kubelet[4251]: E0508 00:29:09.149701 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.150105 kubelet[4251]: E0508 00:29:09.150087 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.150133 kubelet[4251]: W0508 00:29:09.150104 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.150133 kubelet[4251]: E0508 00:29:09.150117 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.155867 kubelet[4251]: E0508 00:29:09.155851 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.155893 kubelet[4251]: W0508 00:29:09.155865 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.155893 kubelet[4251]: E0508 00:29:09.155880 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.156224 kubelet[4251]: E0508 00:29:09.156206 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.156249 kubelet[4251]: W0508 00:29:09.156222 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.156249 kubelet[4251]: E0508 00:29:09.156240 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.241724 kubelet[4251]: E0508 00:29:09.241691 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.241724 kubelet[4251]: W0508 00:29:09.241710 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.241724 kubelet[4251]: E0508 00:29:09.241731 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.242001 kubelet[4251]: E0508 00:29:09.241977 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.242041 kubelet[4251]: W0508 00:29:09.241992 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.242041 kubelet[4251]: E0508 00:29:09.242031 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.242267 kubelet[4251]: E0508 00:29:09.242256 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.242267 kubelet[4251]: W0508 00:29:09.242265 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.242313 kubelet[4251]: E0508 00:29:09.242273 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.242483 kubelet[4251]: E0508 00:29:09.242474 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.242483 kubelet[4251]: W0508 00:29:09.242481 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.242524 kubelet[4251]: E0508 00:29:09.242507 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.242738 kubelet[4251]: E0508 00:29:09.242727 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.242738 kubelet[4251]: W0508 00:29:09.242735 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.242781 kubelet[4251]: E0508 00:29:09.242744 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.242958 kubelet[4251]: E0508 00:29:09.242948 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.242958 kubelet[4251]: W0508 00:29:09.242955 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.243001 kubelet[4251]: E0508 00:29:09.242963 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.243153 kubelet[4251]: E0508 00:29:09.243142 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.243179 kubelet[4251]: W0508 00:29:09.243165 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.243179 kubelet[4251]: E0508 00:29:09.243173 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.243382 kubelet[4251]: E0508 00:29:09.243371 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.243382 kubelet[4251]: W0508 00:29:09.243378 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.243424 kubelet[4251]: E0508 00:29:09.243385 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.243604 kubelet[4251]: E0508 00:29:09.243594 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.243604 kubelet[4251]: W0508 00:29:09.243602 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.243645 kubelet[4251]: E0508 00:29:09.243610 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.243813 kubelet[4251]: E0508 00:29:09.243803 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.243813 kubelet[4251]: W0508 00:29:09.243810 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.243853 kubelet[4251]: E0508 00:29:09.243817 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.244008 kubelet[4251]: E0508 00:29:09.244001 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.244030 kubelet[4251]: W0508 00:29:09.244008 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.244030 kubelet[4251]: E0508 00:29:09.244015 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.244159 kubelet[4251]: E0508 00:29:09.244152 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.244184 kubelet[4251]: W0508 00:29:09.244159 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.244184 kubelet[4251]: E0508 00:29:09.244166 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.244307 kubelet[4251]: E0508 00:29:09.244299 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.244326 kubelet[4251]: W0508 00:29:09.244307 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.244326 kubelet[4251]: E0508 00:29:09.244314 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.244442 kubelet[4251]: E0508 00:29:09.244435 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.244462 kubelet[4251]: W0508 00:29:09.244442 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.244462 kubelet[4251]: E0508 00:29:09.244449 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.244572 kubelet[4251]: E0508 00:29:09.244565 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.244592 kubelet[4251]: W0508 00:29:09.244572 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.244592 kubelet[4251]: E0508 00:29:09.244578 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.244710 kubelet[4251]: E0508 00:29:09.244703 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.244730 kubelet[4251]: W0508 00:29:09.244710 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.244730 kubelet[4251]: E0508 00:29:09.244716 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.244863 kubelet[4251]: E0508 00:29:09.244855 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.244885 kubelet[4251]: W0508 00:29:09.244863 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.244885 kubelet[4251]: E0508 00:29:09.244872 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.245001 kubelet[4251]: E0508 00:29:09.244994 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.245024 kubelet[4251]: W0508 00:29:09.245001 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.245024 kubelet[4251]: E0508 00:29:09.245007 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.245146 kubelet[4251]: E0508 00:29:09.245138 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.245168 kubelet[4251]: W0508 00:29:09.245146 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.245168 kubelet[4251]: E0508 00:29:09.245153 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.245287 kubelet[4251]: E0508 00:29:09.245281 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.245309 kubelet[4251]: W0508 00:29:09.245287 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.245309 kubelet[4251]: E0508 00:29:09.245293 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.247653 kubelet[4251]: E0508 00:29:09.247639 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.247673 kubelet[4251]: W0508 00:29:09.247654 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.247673 kubelet[4251]: E0508 00:29:09.247666 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.247717 kubelet[4251]: I0508 00:29:09.247689 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e48e3e91-8128-422e-a541-9325080112a1-socket-dir\") pod \"csi-node-driver-qllhv\" (UID: \"e48e3e91-8128-422e-a541-9325080112a1\") " pod="calico-system/csi-node-driver-qllhv"
May 8 00:29:09.247887 kubelet[4251]: E0508 00:29:09.247873 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.247887 kubelet[4251]: W0508 00:29:09.247884 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.247932 kubelet[4251]: E0508 00:29:09.247896 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.247932 kubelet[4251]: I0508 00:29:09.247910 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e48e3e91-8128-422e-a541-9325080112a1-varrun\") pod \"csi-node-driver-qllhv\" (UID: \"e48e3e91-8128-422e-a541-9325080112a1\") " pod="calico-system/csi-node-driver-qllhv"
May 8 00:29:09.248084 kubelet[4251]: E0508 00:29:09.248072 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.248112 kubelet[4251]: W0508 00:29:09.248084 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.248112 kubelet[4251]: E0508 00:29:09.248094 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.248153 kubelet[4251]: I0508 00:29:09.248116 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2wg\" (UniqueName: \"kubernetes.io/projected/e48e3e91-8128-422e-a541-9325080112a1-kube-api-access-5z2wg\") pod \"csi-node-driver-qllhv\" (UID: \"e48e3e91-8128-422e-a541-9325080112a1\") " pod="calico-system/csi-node-driver-qllhv"
May 8 00:29:09.248288 kubelet[4251]: E0508 00:29:09.248275 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.248288 kubelet[4251]: W0508 00:29:09.248285 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.248333 kubelet[4251]: E0508 00:29:09.248296 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.248333 kubelet[4251]: I0508 00:29:09.248309 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e48e3e91-8128-422e-a541-9325080112a1-kubelet-dir\") pod \"csi-node-driver-qllhv\" (UID: \"e48e3e91-8128-422e-a541-9325080112a1\") " pod="calico-system/csi-node-driver-qllhv"
May 8 00:29:09.248499 kubelet[4251]: E0508 00:29:09.248483 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.248521 kubelet[4251]: W0508 00:29:09.248498 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.248521 kubelet[4251]: E0508 00:29:09.248514 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.248722 kubelet[4251]: E0508 00:29:09.248711 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.248722 kubelet[4251]: W0508 00:29:09.248719 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.248763 kubelet[4251]: E0508 00:29:09.248729 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.248937 kubelet[4251]: E0508 00:29:09.248927 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.248937 kubelet[4251]: W0508 00:29:09.248935 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.248975 kubelet[4251]: E0508 00:29:09.248946 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:29:09.249160 kubelet[4251]: E0508 00:29:09.249153 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:29:09.249188 kubelet[4251]: W0508 00:29:09.249161 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:29:09.249188 kubelet[4251]: E0508 00:29:09.249171 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 8 00:29:09.249393 kubelet[4251]: E0508 00:29:09.249385 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.249415 kubelet[4251]: W0508 00:29:09.249393 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.249415 kubelet[4251]: E0508 00:29:09.249403 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.249604 kubelet[4251]: E0508 00:29:09.249597 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.249626 kubelet[4251]: W0508 00:29:09.249604 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.249645 kubelet[4251]: E0508 00:29:09.249620 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.249742 kubelet[4251]: E0508 00:29:09.249734 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.249765 kubelet[4251]: W0508 00:29:09.249742 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.249765 kubelet[4251]: E0508 00:29:09.249758 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.249804 kubelet[4251]: I0508 00:29:09.249780 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e48e3e91-8128-422e-a541-9325080112a1-registration-dir\") pod \"csi-node-driver-qllhv\" (UID: \"e48e3e91-8128-422e-a541-9325080112a1\") " pod="calico-system/csi-node-driver-qllhv" May 8 00:29:09.249896 kubelet[4251]: E0508 00:29:09.249888 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.249896 kubelet[4251]: W0508 00:29:09.249895 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.249937 kubelet[4251]: E0508 00:29:09.249905 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.250128 kubelet[4251]: E0508 00:29:09.250116 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.250149 kubelet[4251]: W0508 00:29:09.250129 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.250149 kubelet[4251]: E0508 00:29:09.250142 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.250365 kubelet[4251]: E0508 00:29:09.250357 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.250365 kubelet[4251]: W0508 00:29:09.250365 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.250406 kubelet[4251]: E0508 00:29:09.250371 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.250596 kubelet[4251]: E0508 00:29:09.250588 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.250618 kubelet[4251]: W0508 00:29:09.250596 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.250618 kubelet[4251]: E0508 00:29:09.250603 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.323021 containerd[2726]: time="2025-05-08T00:29:09.322935903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-858bcc7475-v6pfz,Uid:e3ffa4da-f787-4841-bb42-177e5a214190,Namespace:calico-system,Attempt:0,}" May 8 00:29:09.341644 containerd[2726]: time="2025-05-08T00:29:09.341581050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ncdtd,Uid:bdd4679d-f51b-4b1a-955a-d1b148438dee,Namespace:calico-system,Attempt:0,}" May 8 00:29:09.348151 containerd[2726]: time="2025-05-08T00:29:09.347805166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:09.348205 containerd[2726]: time="2025-05-08T00:29:09.348147392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:09.348205 containerd[2726]: time="2025-05-08T00:29:09.348158633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:09.348247 containerd[2726]: time="2025-05-08T00:29:09.348225958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:09.351102 kubelet[4251]: E0508 00:29:09.351086 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.351133 kubelet[4251]: W0508 00:29:09.351103 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.351133 kubelet[4251]: E0508 00:29:09.351120 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.351347 kubelet[4251]: E0508 00:29:09.351339 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.351371 kubelet[4251]: W0508 00:29:09.351347 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.351371 kubelet[4251]: E0508 00:29:09.351358 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.351581 kubelet[4251]: E0508 00:29:09.351573 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.351604 kubelet[4251]: W0508 00:29:09.351581 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.351604 kubelet[4251]: E0508 00:29:09.351591 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.351808 kubelet[4251]: E0508 00:29:09.351801 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.351831 kubelet[4251]: W0508 00:29:09.351810 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.351831 kubelet[4251]: E0508 00:29:09.351820 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.352082 kubelet[4251]: E0508 00:29:09.352074 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.352102 kubelet[4251]: W0508 00:29:09.352082 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.352102 kubelet[4251]: E0508 00:29:09.352093 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.352319 kubelet[4251]: E0508 00:29:09.352311 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.352342 kubelet[4251]: W0508 00:29:09.352321 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.352342 kubelet[4251]: E0508 00:29:09.352332 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.352531 kubelet[4251]: E0508 00:29:09.352524 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.352553 kubelet[4251]: W0508 00:29:09.352531 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.352575 kubelet[4251]: E0508 00:29:09.352552 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.352733 kubelet[4251]: E0508 00:29:09.352726 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.352758 kubelet[4251]: W0508 00:29:09.352732 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.352758 kubelet[4251]: E0508 00:29:09.352747 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.352938 kubelet[4251]: E0508 00:29:09.352931 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.352960 kubelet[4251]: W0508 00:29:09.352937 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.352960 kubelet[4251]: E0508 00:29:09.352951 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.353134 kubelet[4251]: E0508 00:29:09.353127 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.353161 kubelet[4251]: W0508 00:29:09.353134 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.353185 kubelet[4251]: E0508 00:29:09.353161 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.353223 containerd[2726]: time="2025-05-08T00:29:09.352902436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:09.353244 containerd[2726]: time="2025-05-08T00:29:09.353219180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:09.353244 containerd[2726]: time="2025-05-08T00:29:09.353231221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:09.353312 containerd[2726]: time="2025-05-08T00:29:09.353297746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:09.353336 kubelet[4251]: E0508 00:29:09.353330 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.353358 kubelet[4251]: W0508 00:29:09.353336 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.353378 kubelet[4251]: E0508 00:29:09.353354 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.353541 kubelet[4251]: E0508 00:29:09.353533 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.353563 kubelet[4251]: W0508 00:29:09.353540 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.353563 kubelet[4251]: E0508 00:29:09.353550 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.353857 kubelet[4251]: E0508 00:29:09.353845 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.353877 kubelet[4251]: W0508 00:29:09.353858 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.353877 kubelet[4251]: E0508 00:29:09.353873 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.354105 kubelet[4251]: E0508 00:29:09.354095 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.354126 kubelet[4251]: W0508 00:29:09.354106 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.354126 kubelet[4251]: E0508 00:29:09.354118 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.354338 kubelet[4251]: E0508 00:29:09.354330 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.354361 kubelet[4251]: W0508 00:29:09.354338 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.354361 kubelet[4251]: E0508 00:29:09.354353 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.354573 kubelet[4251]: E0508 00:29:09.354565 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.354593 kubelet[4251]: W0508 00:29:09.354573 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.354615 kubelet[4251]: E0508 00:29:09.354599 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.354756 kubelet[4251]: E0508 00:29:09.354748 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.354778 kubelet[4251]: W0508 00:29:09.354755 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.354799 kubelet[4251]: E0508 00:29:09.354774 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.354995 kubelet[4251]: E0508 00:29:09.354986 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.355020 kubelet[4251]: W0508 00:29:09.354995 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.355020 kubelet[4251]: E0508 00:29:09.355013 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.355171 kubelet[4251]: E0508 00:29:09.355163 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.355196 kubelet[4251]: W0508 00:29:09.355171 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.355196 kubelet[4251]: E0508 00:29:09.355188 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.355347 kubelet[4251]: E0508 00:29:09.355339 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.355367 kubelet[4251]: W0508 00:29:09.355346 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.355367 kubelet[4251]: E0508 00:29:09.355360 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.355673 kubelet[4251]: E0508 00:29:09.355662 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.355695 kubelet[4251]: W0508 00:29:09.355673 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.355695 kubelet[4251]: E0508 00:29:09.355687 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.355934 kubelet[4251]: E0508 00:29:09.355921 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.355955 kubelet[4251]: W0508 00:29:09.355934 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.355955 kubelet[4251]: E0508 00:29:09.355948 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.356175 kubelet[4251]: E0508 00:29:09.356162 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.356175 kubelet[4251]: W0508 00:29:09.356171 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.356239 kubelet[4251]: E0508 00:29:09.356182 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.356407 kubelet[4251]: E0508 00:29:09.356397 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.356440 kubelet[4251]: W0508 00:29:09.356406 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.356440 kubelet[4251]: E0508 00:29:09.356417 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:29:09.356614 kubelet[4251]: E0508 00:29:09.356606 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.356636 kubelet[4251]: W0508 00:29:09.356614 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.356636 kubelet[4251]: E0508 00:29:09.356622 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.364379 kubelet[4251]: E0508 00:29:09.364365 4251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:29:09.364401 kubelet[4251]: W0508 00:29:09.364380 4251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:29:09.364401 kubelet[4251]: E0508 00:29:09.364392 4251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:29:09.375170 systemd[1]: Started cri-containerd-1b552c573b44a11cf830acbea1bf86481f672382a5ee3d4b58a3e35c34608ea6.scope - libcontainer container 1b552c573b44a11cf830acbea1bf86481f672382a5ee3d4b58a3e35c34608ea6. May 8 00:29:09.377575 systemd[1]: Started cri-containerd-9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d.scope - libcontainer container 9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d. 
May 8 00:29:09.392739 containerd[2726]: time="2025-05-08T00:29:09.392708882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ncdtd,Uid:bdd4679d-f51b-4b1a-955a-d1b148438dee,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d\"" May 8 00:29:09.393786 containerd[2726]: time="2025-05-08T00:29:09.393766963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:29:09.397888 containerd[2726]: time="2025-05-08T00:29:09.397860156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-858bcc7475-v6pfz,Uid:e3ffa4da-f787-4841-bb42-177e5a214190,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b552c573b44a11cf830acbea1bf86481f672382a5ee3d4b58a3e35c34608ea6\"" May 8 00:29:09.911127 containerd[2726]: time="2025-05-08T00:29:09.911063306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 8 00:29:09.911181 containerd[2726]: time="2025-05-08T00:29:09.911093709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:09.911902 containerd[2726]: time="2025-05-08T00:29:09.911878209Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:09.913615 containerd[2726]: time="2025-05-08T00:29:09.913589180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:09.914387 containerd[2726]: time="2025-05-08T00:29:09.914363159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id 
\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 520.570274ms" May 8 00:29:09.914419 containerd[2726]: time="2025-05-08T00:29:09.914391841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 00:29:09.915136 containerd[2726]: time="2025-05-08T00:29:09.915118417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:29:09.916073 containerd[2726]: time="2025-05-08T00:29:09.916043567Z" level=info msg="CreateContainer within sandbox \"9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:29:09.934471 containerd[2726]: time="2025-05-08T00:29:09.934440055Z" level=info msg="CreateContainer within sandbox \"9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3\"" May 8 00:29:09.934864 containerd[2726]: time="2025-05-08T00:29:09.934837965Z" level=info msg="StartContainer for \"0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3\"" May 8 00:29:09.966242 systemd[1]: Started cri-containerd-0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3.scope - libcontainer container 0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3. 
May 8 00:29:09.986109 containerd[2726]: time="2025-05-08T00:29:09.986073246Z" level=info msg="StartContainer for \"0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3\" returns successfully"
May 8 00:29:09.998831 systemd[1]: cri-containerd-0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3.scope: Deactivated successfully.
May 8 00:29:10.097680 containerd[2726]: time="2025-05-08T00:29:10.097630992Z" level=info msg="shim disconnected" id=0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3 namespace=k8s.io
May 8 00:29:10.097762 containerd[2726]: time="2025-05-08T00:29:10.097679636Z" level=warning msg="cleaning up after shim disconnected" id=0542d109e286d40175bbbdf2c59a15b20cf2fb0d551643d3c16890f7a19837e3 namespace=k8s.io
May 8 00:29:10.097762 containerd[2726]: time="2025-05-08T00:29:10.097687516Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:29:10.106381 containerd[2726]: time="2025-05-08T00:29:10.106355745Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:29:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:29:10.516009 containerd[2726]: time="2025-05-08T00:29:10.515958073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:10.516389 containerd[2726]: time="2025-05-08T00:29:10.516025638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
May 8 00:29:10.516745 containerd[2726]: time="2025-05-08T00:29:10.516727569Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:10.518468 containerd[2726]: time="2025-05-08T00:29:10.518444693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:10.519219 containerd[2726]: time="2025-05-08T00:29:10.519198948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 604.056649ms"
May 8 00:29:10.519253 containerd[2726]: time="2025-05-08T00:29:10.519225430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
May 8 00:29:10.524605 containerd[2726]: time="2025-05-08T00:29:10.524579258Z" level=info msg="CreateContainer within sandbox \"1b552c573b44a11cf830acbea1bf86481f672382a5ee3d4b58a3e35c34608ea6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 8 00:29:10.529545 containerd[2726]: time="2025-05-08T00:29:10.529508335Z" level=info msg="CreateContainer within sandbox \"1b552c573b44a11cf830acbea1bf86481f672382a5ee3d4b58a3e35c34608ea6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"082588598f3d1300bede7963e2b2d6abb2e2c01ee45f0e42df342a9c426fee0d\""
May 8 00:29:10.529980 containerd[2726]: time="2025-05-08T00:29:10.529804517Z" level=info msg="StartContainer for \"082588598f3d1300bede7963e2b2d6abb2e2c01ee45f0e42df342a9c426fee0d\""
May 8 00:29:10.553173 systemd[1]: Started cri-containerd-082588598f3d1300bede7963e2b2d6abb2e2c01ee45f0e42df342a9c426fee0d.scope - libcontainer container 082588598f3d1300bede7963e2b2d6abb2e2c01ee45f0e42df342a9c426fee0d.
May 8 00:29:10.577578 containerd[2726]: time="2025-05-08T00:29:10.577545977Z" level=info msg="StartContainer for \"082588598f3d1300bede7963e2b2d6abb2e2c01ee45f0e42df342a9c426fee0d\" returns successfully"
May 8 00:29:10.665069 kubelet[4251]: E0508 00:29:10.665024 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qllhv" podUID="e48e3e91-8128-422e-a541-9325080112a1"
May 8 00:29:10.694231 containerd[2726]: time="2025-05-08T00:29:10.694179431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 8 00:29:10.728330 kubelet[4251]: I0508 00:29:10.728287 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-858bcc7475-v6pfz" podStartSLOduration=0.607080364 podStartE2EDuration="1.728270422s" podCreationTimestamp="2025-05-08 00:29:09 +0000 UTC" firstStartedPulling="2025-05-08 00:29:09.398601613 +0000 UTC m=+15.806370205" lastFinishedPulling="2025-05-08 00:29:10.519791711 +0000 UTC m=+16.927560263" observedRunningTime="2025-05-08 00:29:10.728217698 +0000 UTC m=+17.135986330" watchObservedRunningTime="2025-05-08 00:29:10.728270422 +0000 UTC m=+17.136039014"
May 8 00:29:11.695957 kubelet[4251]: I0508 00:29:11.695929 4251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:29:11.985123 containerd[2726]: time="2025-05-08T00:29:11.985024183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:11.985123 containerd[2726]: time="2025-05-08T00:29:11.985076027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270"
May 8 00:29:11.985829 containerd[2726]: time="2025-05-08T00:29:11.985805437Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:11.987624 containerd[2726]: time="2025-05-08T00:29:11.987606920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:29:11.988359 containerd[2726]: time="2025-05-08T00:29:11.988338051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.294125258s"
May 8 00:29:11.988385 containerd[2726]: time="2025-05-08T00:29:11.988364572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\""
May 8 00:29:11.990029 containerd[2726]: time="2025-05-08T00:29:11.990007205Z" level=info msg="CreateContainer within sandbox \"9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 8 00:29:11.995806 containerd[2726]: time="2025-05-08T00:29:11.995782282Z" level=info msg="CreateContainer within sandbox \"9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d\""
May 8 00:29:11.996146 containerd[2726]: time="2025-05-08T00:29:11.996125866Z" level=info msg="StartContainer for \"f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d\""
May 8 00:29:12.032238 systemd[1]: Started cri-containerd-f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d.scope - libcontainer container f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d.
May 8 00:29:12.052756 containerd[2726]: time="2025-05-08T00:29:12.052724450Z" level=info msg="StartContainer for \"f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d\" returns successfully"
May 8 00:29:12.429887 systemd[1]: cri-containerd-f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d.scope: Deactivated successfully.
May 8 00:29:12.430219 systemd[1]: cri-containerd-f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d.scope: Consumed 830ms CPU time, 179.6M memory peak, 150.3M written to disk.
May 8 00:29:12.443293 kubelet[4251]: I0508 00:29:12.443251 4251 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 8 00:29:12.443777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d-rootfs.mount: Deactivated successfully.
May 8 00:29:12.460439 systemd[1]: Created slice kubepods-burstable-pod8583ab32_415e_4ed6_826b_5de624d0cf2d.slice - libcontainer container kubepods-burstable-pod8583ab32_415e_4ed6_826b_5de624d0cf2d.slice.
May 8 00:29:12.464990 systemd[1]: Created slice kubepods-burstable-podae4b5035_3b5c_43b7_9e5c_b67545c7be07.slice - libcontainer container kubepods-burstable-podae4b5035_3b5c_43b7_9e5c_b67545c7be07.slice.
May 8 00:29:12.469148 systemd[1]: Created slice kubepods-besteffort-podd3782ab7_8e17_40bb_8afe_070d834a6209.slice - libcontainer container kubepods-besteffort-podd3782ab7_8e17_40bb_8afe_070d834a6209.slice.
May 8 00:29:12.471938 kubelet[4251]: I0508 00:29:12.471913 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d49af69-296d-4b24-a562-fc4412202a83-calico-apiserver-certs\") pod \"calico-apiserver-5bd564c9db-fvbbx\" (UID: \"8d49af69-296d-4b24-a562-fc4412202a83\") " pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx"
May 8 00:29:12.472044 kubelet[4251]: I0508 00:29:12.471949 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8583ab32-415e-4ed6-826b-5de624d0cf2d-config-volume\") pod \"coredns-668d6bf9bc-524hx\" (UID: \"8583ab32-415e-4ed6-826b-5de624d0cf2d\") " pod="kube-system/coredns-668d6bf9bc-524hx"
May 8 00:29:12.472044 kubelet[4251]: I0508 00:29:12.471969 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f17d894-ad9b-4601-911e-d2ddf3fb666b-calico-apiserver-certs\") pod \"calico-apiserver-5bd564c9db-47dls\" (UID: \"2f17d894-ad9b-4601-911e-d2ddf3fb666b\") " pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls"
May 8 00:29:12.472044 kubelet[4251]: I0508 00:29:12.471988 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3782ab7-8e17-40bb-8afe-070d834a6209-tigera-ca-bundle\") pod \"calico-kube-controllers-5bd9f6d6cd-fck25\" (UID: \"d3782ab7-8e17-40bb-8afe-070d834a6209\") " pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25"
May 8 00:29:12.472130 kubelet[4251]: I0508 00:29:12.472036 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjzhb\" (UniqueName: \"kubernetes.io/projected/2f17d894-ad9b-4601-911e-d2ddf3fb666b-kube-api-access-qjzhb\") pod \"calico-apiserver-5bd564c9db-47dls\" (UID: \"2f17d894-ad9b-4601-911e-d2ddf3fb666b\") " pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls"
May 8 00:29:12.472130 kubelet[4251]: I0508 00:29:12.472095 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfnf\" (UniqueName: \"kubernetes.io/projected/8d49af69-296d-4b24-a562-fc4412202a83-kube-api-access-jlfnf\") pod \"calico-apiserver-5bd564c9db-fvbbx\" (UID: \"8d49af69-296d-4b24-a562-fc4412202a83\") " pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx"
May 8 00:29:12.472202 kubelet[4251]: I0508 00:29:12.472125 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae4b5035-3b5c-43b7-9e5c-b67545c7be07-config-volume\") pod \"coredns-668d6bf9bc-j49s4\" (UID: \"ae4b5035-3b5c-43b7-9e5c-b67545c7be07\") " pod="kube-system/coredns-668d6bf9bc-j49s4"
May 8 00:29:12.472202 kubelet[4251]: I0508 00:29:12.472155 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgts\" (UniqueName: \"kubernetes.io/projected/ae4b5035-3b5c-43b7-9e5c-b67545c7be07-kube-api-access-9cgts\") pod \"coredns-668d6bf9bc-j49s4\" (UID: \"ae4b5035-3b5c-43b7-9e5c-b67545c7be07\") " pod="kube-system/coredns-668d6bf9bc-j49s4"
May 8 00:29:12.472243 kubelet[4251]: I0508 00:29:12.472193 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgq57\" (UniqueName: \"kubernetes.io/projected/d3782ab7-8e17-40bb-8afe-070d834a6209-kube-api-access-bgq57\") pod \"calico-kube-controllers-5bd9f6d6cd-fck25\" (UID: \"d3782ab7-8e17-40bb-8afe-070d834a6209\") " pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25"
May 8 00:29:12.472243 kubelet[4251]: I0508 00:29:12.472218 4251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vsmc\" (UniqueName: \"kubernetes.io/projected/8583ab32-415e-4ed6-826b-5de624d0cf2d-kube-api-access-4vsmc\") pod \"coredns-668d6bf9bc-524hx\" (UID: \"8583ab32-415e-4ed6-826b-5de624d0cf2d\") " pod="kube-system/coredns-668d6bf9bc-524hx"
May 8 00:29:12.472703 systemd[1]: Created slice kubepods-besteffort-pod8d49af69_296d_4b24_a562_fc4412202a83.slice - libcontainer container kubepods-besteffort-pod8d49af69_296d_4b24_a562_fc4412202a83.slice.
May 8 00:29:12.476335 systemd[1]: Created slice kubepods-besteffort-pod2f17d894_ad9b_4601_911e_d2ddf3fb666b.slice - libcontainer container kubepods-besteffort-pod2f17d894_ad9b_4601_911e_d2ddf3fb666b.slice.
May 8 00:29:12.663764 containerd[2726]: time="2025-05-08T00:29:12.663709893Z" level=info msg="shim disconnected" id=f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d namespace=k8s.io
May 8 00:29:12.663764 containerd[2726]: time="2025-05-08T00:29:12.663764576Z" level=warning msg="cleaning up after shim disconnected" id=f38bbf5826d3d121ce94e4772f92386c1036756bebf8f509d0739372b3224d1d namespace=k8s.io
May 8 00:29:12.663865 containerd[2726]: time="2025-05-08T00:29:12.663772097Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:29:12.668752 systemd[1]: Created slice kubepods-besteffort-pode48e3e91_8128_422e_a541_9325080112a1.slice - libcontainer container kubepods-besteffort-pode48e3e91_8128_422e_a541_9325080112a1.slice.
May 8 00:29:12.670431 containerd[2726]: time="2025-05-08T00:29:12.670405209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:0,}"
May 8 00:29:12.699661 containerd[2726]: time="2025-05-08T00:29:12.699600711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 8 00:29:12.731505 containerd[2726]: time="2025-05-08T00:29:12.731471987Z" level=error msg="Failed to destroy network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.731822 containerd[2726]: time="2025-05-08T00:29:12.731798329Z" level=error msg="encountered an error cleaning up failed sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.731877 containerd[2726]: time="2025-05-08T00:29:12.731858452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.732122 kubelet[4251]: E0508 00:29:12.732046 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.732355 kubelet[4251]: E0508 00:29:12.732162 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qllhv"
May 8 00:29:12.732355 kubelet[4251]: E0508 00:29:12.732181 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qllhv"
May 8 00:29:12.732355 kubelet[4251]: E0508 00:29:12.732220 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qllhv_calico-system(e48e3e91-8128-422e-a541-9325080112a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qllhv_calico-system(e48e3e91-8128-422e-a541-9325080112a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qllhv" podUID="e48e3e91-8128-422e-a541-9325080112a1"
May 8 00:29:12.769426 containerd[2726]: time="2025-05-08T00:29:12.769400178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:0,}"
May 8 00:29:12.769469 containerd[2726]: time="2025-05-08T00:29:12.769418779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:0,}"
May 8 00:29:12.771900 containerd[2726]: time="2025-05-08T00:29:12.771877580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:0,}"
May 8 00:29:12.774537 containerd[2726]: time="2025-05-08T00:29:12.774513591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:0,}"
May 8 00:29:12.779615 containerd[2726]: time="2025-05-08T00:29:12.779566040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:0,}"
May 8 00:29:12.815654 containerd[2726]: time="2025-05-08T00:29:12.815602468Z" level=error msg="Failed to destroy network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.815971 containerd[2726]: time="2025-05-08T00:29:12.815948331Z" level=error msg="encountered an error cleaning up failed sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816036 containerd[2726]: time="2025-05-08T00:29:12.816020175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816139 containerd[2726]: time="2025-05-08T00:29:12.816026576Z" level=error msg="Failed to destroy network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816293 kubelet[4251]: E0508 00:29:12.816254 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816333 kubelet[4251]: E0508 00:29:12.816321 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-524hx"
May 8 00:29:12.816358 kubelet[4251]: E0508 00:29:12.816340 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-524hx"
May 8 00:29:12.816406 kubelet[4251]: E0508 00:29:12.816383 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-524hx_kube-system(8583ab32-415e-4ed6-826b-5de624d0cf2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-524hx_kube-system(8583ab32-415e-4ed6-826b-5de624d0cf2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-524hx" podUID="8583ab32-415e-4ed6-826b-5de624d0cf2d"
May 8 00:29:12.816467 containerd[2726]: time="2025-05-08T00:29:12.816430562Z" level=error msg="Failed to destroy network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816492 containerd[2726]: time="2025-05-08T00:29:12.816470205Z" level=error msg="encountered an error cleaning up failed sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816541 containerd[2726]: time="2025-05-08T00:29:12.816522968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816675 kubelet[4251]: E0508 00:29:12.816654 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816745 kubelet[4251]: E0508 00:29:12.816689 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j49s4"
May 8 00:29:12.816745 kubelet[4251]: E0508 00:29:12.816706 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j49s4"
May 8 00:29:12.816789 kubelet[4251]: E0508 00:29:12.816736 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j49s4_kube-system(ae4b5035-3b5c-43b7-9e5c-b67545c7be07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j49s4_kube-system(ae4b5035-3b5c-43b7-9e5c-b67545c7be07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j49s4" podUID="ae4b5035-3b5c-43b7-9e5c-b67545c7be07"
May 8 00:29:12.816827 containerd[2726]: time="2025-05-08T00:29:12.816740902Z" level=error msg="encountered an error cleaning up failed sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.816827 containerd[2726]: time="2025-05-08T00:29:12.816788585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.817055 kubelet[4251]: E0508 00:29:12.816970 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.817055 kubelet[4251]: E0508 00:29:12.817007 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25"
May 8 00:29:12.817055 kubelet[4251]: E0508 00:29:12.817020 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25"
May 8 00:29:12.817141 kubelet[4251]: E0508 00:29:12.817046 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bd9f6d6cd-fck25_calico-system(d3782ab7-8e17-40bb-8afe-070d834a6209)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bd9f6d6cd-fck25_calico-system(d3782ab7-8e17-40bb-8afe-070d834a6209)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" podUID="d3782ab7-8e17-40bb-8afe-070d834a6209"
May 8 00:29:12.820329 containerd[2726]: time="2025-05-08T00:29:12.820285373Z" level=error msg="Failed to destroy network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.820687 containerd[2726]: time="2025-05-08T00:29:12.820664918Z" level=error msg="encountered an error cleaning up failed sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.820730 containerd[2726]: time="2025-05-08T00:29:12.820711921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.820858 kubelet[4251]: E0508 00:29:12.820838 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.820894 kubelet[4251]: E0508 00:29:12.820869 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx"
May 8 00:29:12.820894 kubelet[4251]: E0508 00:29:12.820884 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx"
May 8 00:29:12.820941 kubelet[4251]: E0508 00:29:12.820913 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd564c9db-fvbbx_calico-apiserver(8d49af69-296d-4b24-a562-fc4412202a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd564c9db-fvbbx_calico-apiserver(8d49af69-296d-4b24-a562-fc4412202a83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" podUID="8d49af69-296d-4b24-a562-fc4412202a83"
May 8 00:29:12.825765 containerd[2726]: time="2025-05-08T00:29:12.825739008Z" level=error msg="Failed to destroy network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.826042 containerd[2726]: time="2025-05-08T00:29:12.826020907Z" level=error msg="encountered an error cleaning up failed sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.826130 containerd[2726]: time="2025-05-08T00:29:12.826075030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.826222 kubelet[4251]: E0508 00:29:12.826194 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:29:12.826253 kubelet[4251]: E0508 00:29:12.826235 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" May 8 00:29:12.826275 kubelet[4251]: E0508 00:29:12.826250 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" May 8 00:29:12.826299 kubelet[4251]: E0508 00:29:12.826279 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd564c9db-47dls_calico-apiserver(2f17d894-ad9b-4601-911e-d2ddf3fb666b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd564c9db-47dls_calico-apiserver(2f17d894-ad9b-4601-911e-d2ddf3fb666b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" podUID="2f17d894-ad9b-4601-911e-d2ddf3fb666b" May 8 00:29:13.700360 kubelet[4251]: I0508 00:29:13.700330 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188" May 8 00:29:13.700794 containerd[2726]: time="2025-05-08T00:29:13.700764804Z" level=info msg="StopPodSandbox for 
\"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\"" May 8 00:29:13.701028 containerd[2726]: time="2025-05-08T00:29:13.700922853Z" level=info msg="Ensure that sandbox b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188 in task-service has been cleanup successfully" May 8 00:29:13.701076 kubelet[4251]: I0508 00:29:13.701062 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888" May 8 00:29:13.701114 containerd[2726]: time="2025-05-08T00:29:13.701096704Z" level=info msg="TearDown network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" successfully" May 8 00:29:13.701114 containerd[2726]: time="2025-05-08T00:29:13.701109625Z" level=info msg="StopPodSandbox for \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" returns successfully" May 8 00:29:13.701500 containerd[2726]: time="2025-05-08T00:29:13.701484168Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\"" May 8 00:29:13.701527 containerd[2726]: time="2025-05-08T00:29:13.701502769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:1,}" May 8 00:29:13.701624 containerd[2726]: time="2025-05-08T00:29:13.701612776Z" level=info msg="Ensure that sandbox 12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888 in task-service has been cleanup successfully" May 8 00:29:13.701724 kubelet[4251]: I0508 00:29:13.701710 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf" May 8 00:29:13.701772 containerd[2726]: time="2025-05-08T00:29:13.701758945Z" level=info msg="TearDown network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" 
successfully" May 8 00:29:13.701801 containerd[2726]: time="2025-05-08T00:29:13.701772626Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" returns successfully" May 8 00:29:13.702030 containerd[2726]: time="2025-05-08T00:29:13.702006240Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\"" May 8 00:29:13.702170 containerd[2726]: time="2025-05-08T00:29:13.702157610Z" level=info msg="Ensure that sandbox 373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf in task-service has been cleanup successfully" May 8 00:29:13.702240 containerd[2726]: time="2025-05-08T00:29:13.702221014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:1,}" May 8 00:29:13.702325 containerd[2726]: time="2025-05-08T00:29:13.702308619Z" level=info msg="TearDown network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" successfully" May 8 00:29:13.702359 containerd[2726]: time="2025-05-08T00:29:13.702326180Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" returns successfully" May 8 00:29:13.702541 kubelet[4251]: I0508 00:29:13.702529 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3" May 8 00:29:13.702921 containerd[2726]: time="2025-05-08T00:29:13.702639039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:1,}" May 8 00:29:13.702921 containerd[2726]: time="2025-05-08T00:29:13.702886215Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\"" May 8 00:29:13.702663 systemd[1]: 
run-netns-cni\x2d6bea3652\x2db235\x2dc575\x2d07fd\x2df4dcae34dc86.mount: Deactivated successfully. May 8 00:29:13.703382 containerd[2726]: time="2025-05-08T00:29:13.703028903Z" level=info msg="Ensure that sandbox 8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3 in task-service has been cleanup successfully" May 8 00:29:13.703382 containerd[2726]: time="2025-05-08T00:29:13.703184273Z" level=info msg="TearDown network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" successfully" May 8 00:29:13.703382 containerd[2726]: time="2025-05-08T00:29:13.703196594Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" returns successfully" May 8 00:29:13.703455 kubelet[4251]: I0508 00:29:13.703262 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0" May 8 00:29:13.703587 containerd[2726]: time="2025-05-08T00:29:13.703566777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:1,}" May 8 00:29:13.703612 containerd[2726]: time="2025-05-08T00:29:13.703586738Z" level=info msg="StopPodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\"" May 8 00:29:13.703728 containerd[2726]: time="2025-05-08T00:29:13.703715226Z" level=info msg="Ensure that sandbox 9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0 in task-service has been cleanup successfully" May 8 00:29:13.703885 containerd[2726]: time="2025-05-08T00:29:13.703872196Z" level=info msg="TearDown network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" successfully" May 8 00:29:13.703910 containerd[2726]: time="2025-05-08T00:29:13.703886997Z" level=info msg="StopPodSandbox for 
\"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" returns successfully" May 8 00:29:13.703947 kubelet[4251]: I0508 00:29:13.703932 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120" May 8 00:29:13.704199 containerd[2726]: time="2025-05-08T00:29:13.704181855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:1,}" May 8 00:29:13.704300 containerd[2726]: time="2025-05-08T00:29:13.704284941Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\"" May 8 00:29:13.704421 containerd[2726]: time="2025-05-08T00:29:13.704409989Z" level=info msg="Ensure that sandbox bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120 in task-service has been cleanup successfully" May 8 00:29:13.704557 containerd[2726]: time="2025-05-08T00:29:13.704546037Z" level=info msg="TearDown network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" successfully" May 8 00:29:13.704585 containerd[2726]: time="2025-05-08T00:29:13.704557598Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" returns successfully" May 8 00:29:13.704623 systemd[1]: run-netns-cni\x2dcec20f93\x2d0e4d\x2dde90\x2d4a78\x2db33f5afdd6e5.mount: Deactivated successfully. May 8 00:29:13.704705 systemd[1]: run-netns-cni\x2d7a7fe7c9\x2dd760\x2da871\x2dcb81\x2d49f748564168.mount: Deactivated successfully. May 8 00:29:13.704755 systemd[1]: run-netns-cni\x2d7648c541\x2d575b\x2d2b20\x2d6994\x2d0880d6b562fc.mount: Deactivated successfully. 
May 8 00:29:13.704854 containerd[2726]: time="2025-05-08T00:29:13.704836775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:1,}" May 8 00:29:13.707710 systemd[1]: run-netns-cni\x2dd2bbe809\x2d333b\x2dba2e\x2d7071\x2d478cd888d44b.mount: Deactivated successfully. May 8 00:29:13.707786 systemd[1]: run-netns-cni\x2d58bc818f\x2dd73e\x2d714b\x2d661c\x2d06de573e5df7.mount: Deactivated successfully. May 8 00:29:13.751000 containerd[2726]: time="2025-05-08T00:29:13.750945106Z" level=error msg="Failed to destroy network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751162 containerd[2726]: time="2025-05-08T00:29:13.750945346Z" level=error msg="Failed to destroy network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751350 containerd[2726]: time="2025-05-08T00:29:13.751320369Z" level=error msg="Failed to destroy network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751496 containerd[2726]: time="2025-05-08T00:29:13.751474218Z" level=error msg="encountered an error cleaning up failed sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751528 containerd[2726]: time="2025-05-08T00:29:13.751501180Z" level=error msg="encountered an error cleaning up failed sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751644 containerd[2726]: time="2025-05-08T00:29:13.751541342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751644 containerd[2726]: time="2025-05-08T00:29:13.751552703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751722 containerd[2726]: time="2025-05-08T00:29:13.751649829Z" level=error msg="encountered an error cleaning up failed sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751722 containerd[2726]: time="2025-05-08T00:29:13.751694112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.751771 kubelet[4251]: E0508 00:29:13.751725 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.752003 kubelet[4251]: E0508 00:29:13.751784 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-524hx" May 8 00:29:13.752003 kubelet[4251]: E0508 00:29:13.751799 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 8 00:29:13.752003 kubelet[4251]: E0508 00:29:13.751838 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qllhv" May 8 00:29:13.752003 kubelet[4251]: E0508 00:29:13.751857 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qllhv" May 8 00:29:13.752106 kubelet[4251]: E0508 00:29:13.751806 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-524hx" May 8 00:29:13.752106 kubelet[4251]: E0508 00:29:13.751891 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qllhv_calico-system(e48e3e91-8128-422e-a541-9325080112a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qllhv_calico-system(e48e3e91-8128-422e-a541-9325080112a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qllhv" podUID="e48e3e91-8128-422e-a541-9325080112a1" May 8 00:29:13.752106 kubelet[4251]: E0508 00:29:13.751921 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-524hx_kube-system(8583ab32-415e-4ed6-826b-5de624d0cf2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-524hx_kube-system(8583ab32-415e-4ed6-826b-5de624d0cf2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-524hx" podUID="8583ab32-415e-4ed6-826b-5de624d0cf2d" May 8 00:29:13.752198 kubelet[4251]: E0508 00:29:13.751801 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.752198 kubelet[4251]: E0508 00:29:13.751984 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" May 8 00:29:13.752198 kubelet[4251]: E0508 00:29:13.751998 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" May 8 00:29:13.752262 kubelet[4251]: E0508 00:29:13.752023 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd564c9db-47dls_calico-apiserver(2f17d894-ad9b-4601-911e-d2ddf3fb666b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd564c9db-47dls_calico-apiserver(2f17d894-ad9b-4601-911e-d2ddf3fb666b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" podUID="2f17d894-ad9b-4601-911e-d2ddf3fb666b" May 8 00:29:13.752608 containerd[2726]: time="2025-05-08T00:29:13.752478800Z" level=error msg="Failed to destroy network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.752821 containerd[2726]: time="2025-05-08T00:29:13.752799140Z" level=error msg="encountered an error cleaning up failed sandbox 
\"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.752860 containerd[2726]: time="2025-05-08T00:29:13.752844463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.752905 containerd[2726]: time="2025-05-08T00:29:13.752802260Z" level=error msg="Failed to destroy network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.752977 kubelet[4251]: E0508 00:29:13.752956 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.753010 kubelet[4251]: E0508 00:29:13.752988 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" May 8 00:29:13.753010 kubelet[4251]: E0508 00:29:13.753005 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" May 8 00:29:13.753067 kubelet[4251]: E0508 00:29:13.753032 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd564c9db-fvbbx_calico-apiserver(8d49af69-296d-4b24-a562-fc4412202a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd564c9db-fvbbx_calico-apiserver(8d49af69-296d-4b24-a562-fc4412202a83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" podUID="8d49af69-296d-4b24-a562-fc4412202a83" May 8 00:29:13.753209 containerd[2726]: time="2025-05-08T00:29:13.753186764Z" level=error msg="encountered an error cleaning up failed sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.753240 
containerd[2726]: time="2025-05-08T00:29:13.753228847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.753338 kubelet[4251]: E0508 00:29:13.753316 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.753381 kubelet[4251]: E0508 00:29:13.753367 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j49s4" May 8 00:29:13.753412 kubelet[4251]: E0508 00:29:13.753385 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j49s4" May 8 00:29:13.753435 kubelet[4251]: E0508 
00:29:13.753420 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j49s4_kube-system(ae4b5035-3b5c-43b7-9e5c-b67545c7be07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j49s4_kube-system(ae4b5035-3b5c-43b7-9e5c-b67545c7be07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j49s4" podUID="ae4b5035-3b5c-43b7-9e5c-b67545c7be07" May 8 00:29:13.753688 containerd[2726]: time="2025-05-08T00:29:13.753666034Z" level=error msg="Failed to destroy network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.753956 containerd[2726]: time="2025-05-08T00:29:13.753934970Z" level=error msg="encountered an error cleaning up failed sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.753996 containerd[2726]: time="2025-05-08T00:29:13.753980893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.754163 kubelet[4251]: E0508 00:29:13.754134 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:13.754201 kubelet[4251]: E0508 00:29:13.754178 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" May 8 00:29:13.754201 kubelet[4251]: E0508 00:29:13.754196 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" May 8 00:29:13.754249 kubelet[4251]: E0508 00:29:13.754227 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bd9f6d6cd-fck25_calico-system(d3782ab7-8e17-40bb-8afe-070d834a6209)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5bd9f6d6cd-fck25_calico-system(d3782ab7-8e17-40bb-8afe-070d834a6209)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" podUID="d3782ab7-8e17-40bb-8afe-070d834a6209" May 8 00:29:14.152526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a-shm.mount: Deactivated successfully. May 8 00:29:14.620110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724066196.mount: Deactivated successfully. May 8 00:29:14.637973 containerd[2726]: time="2025-05-08T00:29:14.637919709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 8 00:29:14.637973 containerd[2726]: time="2025-05-08T00:29:14.637919949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:14.638693 containerd[2726]: time="2025-05-08T00:29:14.638665113Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:14.640308 containerd[2726]: time="2025-05-08T00:29:14.640283168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:14.640967 containerd[2726]: time="2025-05-08T00:29:14.640940486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id 
\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 1.941305893s" May 8 00:29:14.640999 containerd[2726]: time="2025-05-08T00:29:14.640971648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 8 00:29:14.646897 containerd[2726]: time="2025-05-08T00:29:14.646867594Z" level=info msg="CreateContainer within sandbox \"9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:29:14.658561 containerd[2726]: time="2025-05-08T00:29:14.658525279Z" level=info msg="CreateContainer within sandbox \"9fd8e589e4248a0ef71e542c7b98b4cb30fcb9086a09b92c2c0b33b5cff7672d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f8b0c97503bee9a40d609d3e1fed228f42df050a60c82887f2b4651d145f285c\"" May 8 00:29:14.658944 containerd[2726]: time="2025-05-08T00:29:14.658919822Z" level=info msg="StartContainer for \"f8b0c97503bee9a40d609d3e1fed228f42df050a60c82887f2b4651d145f285c\"" May 8 00:29:14.685163 systemd[1]: Started cri-containerd-f8b0c97503bee9a40d609d3e1fed228f42df050a60c82887f2b4651d145f285c.scope - libcontainer container f8b0c97503bee9a40d609d3e1fed228f42df050a60c82887f2b4651d145f285c. 
May 8 00:29:14.706478 kubelet[4251]: I0508 00:29:14.706450 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a" May 8 00:29:14.707027 containerd[2726]: time="2025-05-08T00:29:14.706999604Z" level=info msg="StopPodSandbox for \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\"" May 8 00:29:14.707249 containerd[2726]: time="2025-05-08T00:29:14.707159733Z" level=info msg="Ensure that sandbox d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a in task-service has been cleanup successfully" May 8 00:29:14.707463 containerd[2726]: time="2025-05-08T00:29:14.707440990Z" level=info msg="TearDown network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" successfully" May 8 00:29:14.707490 containerd[2726]: time="2025-05-08T00:29:14.707462631Z" level=info msg="StopPodSandbox for \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" returns successfully" May 8 00:29:14.707670 containerd[2726]: time="2025-05-08T00:29:14.707647762Z" level=info msg="StopPodSandbox for \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\"" May 8 00:29:14.707741 containerd[2726]: time="2025-05-08T00:29:14.707727727Z" level=info msg="TearDown network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" successfully" May 8 00:29:14.707766 containerd[2726]: time="2025-05-08T00:29:14.707742528Z" level=info msg="StopPodSandbox for \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" returns successfully" May 8 00:29:14.708066 containerd[2726]: time="2025-05-08T00:29:14.708044105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:2,}" May 8 00:29:14.720539 containerd[2726]: time="2025-05-08T00:29:14.720509277Z" level=info msg="StartContainer for 
\"f8b0c97503bee9a40d609d3e1fed228f42df050a60c82887f2b4651d145f285c\" returns successfully" May 8 00:29:14.720582 kubelet[4251]: I0508 00:29:14.720543 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921" May 8 00:29:14.721034 containerd[2726]: time="2025-05-08T00:29:14.721010706Z" level=info msg="StopPodSandbox for \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\"" May 8 00:29:14.721193 containerd[2726]: time="2025-05-08T00:29:14.721176956Z" level=info msg="Ensure that sandbox f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921 in task-service has been cleanup successfully" May 8 00:29:14.721379 containerd[2726]: time="2025-05-08T00:29:14.721361087Z" level=info msg="TearDown network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" successfully" May 8 00:29:14.721379 containerd[2726]: time="2025-05-08T00:29:14.721376248Z" level=info msg="StopPodSandbox for \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" returns successfully" May 8 00:29:14.721657 containerd[2726]: time="2025-05-08T00:29:14.721636583Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\"" May 8 00:29:14.721737 containerd[2726]: time="2025-05-08T00:29:14.721724028Z" level=info msg="TearDown network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" successfully" May 8 00:29:14.721737 containerd[2726]: time="2025-05-08T00:29:14.721735149Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" returns successfully" May 8 00:29:14.722124 containerd[2726]: time="2025-05-08T00:29:14.722105011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:2,}" May 8 00:29:14.723286 kubelet[4251]: I0508 
00:29:14.723267 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5" May 8 00:29:14.723631 containerd[2726]: time="2025-05-08T00:29:14.723609979Z" level=info msg="StopPodSandbox for \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\"" May 8 00:29:14.723801 containerd[2726]: time="2025-05-08T00:29:14.723758468Z" level=info msg="Ensure that sandbox 61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5 in task-service has been cleanup successfully" May 8 00:29:14.723978 containerd[2726]: time="2025-05-08T00:29:14.723962200Z" level=info msg="TearDown network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" successfully" May 8 00:29:14.724008 containerd[2726]: time="2025-05-08T00:29:14.723977200Z" level=info msg="StopPodSandbox for \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" returns successfully" May 8 00:29:14.724202 containerd[2726]: time="2025-05-08T00:29:14.724181052Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\"" May 8 00:29:14.724286 containerd[2726]: time="2025-05-08T00:29:14.724265417Z" level=info msg="TearDown network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" successfully" May 8 00:29:14.724316 containerd[2726]: time="2025-05-08T00:29:14.724286139Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" returns successfully" May 8 00:29:14.724349 kubelet[4251]: I0508 00:29:14.724332 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544" May 8 00:29:14.724629 containerd[2726]: time="2025-05-08T00:29:14.724605477Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:2,}" May 8 00:29:14.724672 containerd[2726]: time="2025-05-08T00:29:14.724656240Z" level=info msg="StopPodSandbox for \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\"" May 8 00:29:14.724802 containerd[2726]: time="2025-05-08T00:29:14.724786528Z" level=info msg="Ensure that sandbox 20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544 in task-service has been cleanup successfully" May 8 00:29:14.724979 containerd[2726]: time="2025-05-08T00:29:14.724961298Z" level=info msg="TearDown network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" successfully" May 8 00:29:14.725001 containerd[2726]: time="2025-05-08T00:29:14.724976419Z" level=info msg="StopPodSandbox for \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" returns successfully" May 8 00:29:14.725235 containerd[2726]: time="2025-05-08T00:29:14.725220713Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\"" May 8 00:29:14.725285 kubelet[4251]: I0508 00:29:14.725266 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e" May 8 00:29:14.725317 containerd[2726]: time="2025-05-08T00:29:14.725286717Z" level=info msg="TearDown network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" successfully" May 8 00:29:14.725317 containerd[2726]: time="2025-05-08T00:29:14.725296158Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" returns successfully" May 8 00:29:14.725596 containerd[2726]: time="2025-05-08T00:29:14.725579615Z" level=info msg="StopPodSandbox for \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\"" May 8 00:29:14.725630 containerd[2726]: 
time="2025-05-08T00:29:14.725609136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:2,}" May 8 00:29:14.725716 containerd[2726]: time="2025-05-08T00:29:14.725703902Z" level=info msg="Ensure that sandbox ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e in task-service has been cleanup successfully" May 8 00:29:14.725950 containerd[2726]: time="2025-05-08T00:29:14.725910674Z" level=info msg="TearDown network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" successfully" May 8 00:29:14.725950 containerd[2726]: time="2025-05-08T00:29:14.725925595Z" level=info msg="StopPodSandbox for \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" returns successfully" May 8 00:29:14.726281 containerd[2726]: time="2025-05-08T00:29:14.726262935Z" level=info msg="StopPodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\"" May 8 00:29:14.726354 kubelet[4251]: I0508 00:29:14.726342 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7" May 8 00:29:14.726389 containerd[2726]: time="2025-05-08T00:29:14.726345139Z" level=info msg="TearDown network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" successfully" May 8 00:29:14.726389 containerd[2726]: time="2025-05-08T00:29:14.726356380Z" level=info msg="StopPodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" returns successfully" May 8 00:29:14.726739 containerd[2726]: time="2025-05-08T00:29:14.726720681Z" level=info msg="StopPodSandbox for \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\"" May 8 00:29:14.726769 containerd[2726]: time="2025-05-08T00:29:14.726747203Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:2,}" May 8 00:29:14.726874 containerd[2726]: time="2025-05-08T00:29:14.726859290Z" level=info msg="Ensure that sandbox 1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7 in task-service has been cleanup successfully" May 8 00:29:14.727048 containerd[2726]: time="2025-05-08T00:29:14.727033420Z" level=info msg="TearDown network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" successfully" May 8 00:29:14.727083 containerd[2726]: time="2025-05-08T00:29:14.727047701Z" level=info msg="StopPodSandbox for \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" returns successfully" May 8 00:29:14.727322 containerd[2726]: time="2025-05-08T00:29:14.727303516Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\"" May 8 00:29:14.727434 containerd[2726]: time="2025-05-08T00:29:14.727399441Z" level=info msg="TearDown network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" successfully" May 8 00:29:14.727434 containerd[2726]: time="2025-05-08T00:29:14.727411282Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" returns successfully" May 8 00:29:14.727839 containerd[2726]: time="2025-05-08T00:29:14.727814546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:2,}" May 8 00:29:14.765449 containerd[2726]: time="2025-05-08T00:29:14.765395592Z" level=error msg="Failed to destroy network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 
00:29:14.765769 containerd[2726]: time="2025-05-08T00:29:14.765747492Z" level=error msg="encountered an error cleaning up failed sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.765824 containerd[2726]: time="2025-05-08T00:29:14.765808536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.766032 kubelet[4251]: E0508 00:29:14.766000 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.766304 kubelet[4251]: E0508 00:29:14.766080 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" May 8 00:29:14.766304 kubelet[4251]: E0508 00:29:14.766105 
4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" May 8 00:29:14.766304 kubelet[4251]: E0508 00:29:14.766148 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd564c9db-47dls_calico-apiserver(2f17d894-ad9b-4601-911e-d2ddf3fb666b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd564c9db-47dls_calico-apiserver(2f17d894-ad9b-4601-911e-d2ddf3fb666b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" podUID="2f17d894-ad9b-4601-911e-d2ddf3fb666b" May 8 00:29:14.767388 containerd[2726]: time="2025-05-08T00:29:14.767356187Z" level=error msg="Failed to destroy network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.767741 containerd[2726]: time="2025-05-08T00:29:14.767713448Z" level=error msg="encountered an error cleaning up failed sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.767786 containerd[2726]: time="2025-05-08T00:29:14.767771051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.768072 kubelet[4251]: E0508 00:29:14.768039 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.768107 kubelet[4251]: E0508 00:29:14.768093 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-524hx" May 8 00:29:14.768129 kubelet[4251]: E0508 00:29:14.768112 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-524hx" May 8 00:29:14.768167 kubelet[4251]: E0508 00:29:14.768146 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-524hx_kube-system(8583ab32-415e-4ed6-826b-5de624d0cf2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-524hx_kube-system(8583ab32-415e-4ed6-826b-5de624d0cf2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-524hx" podUID="8583ab32-415e-4ed6-826b-5de624d0cf2d" May 8 00:29:14.771269 containerd[2726]: time="2025-05-08T00:29:14.771237335Z" level=error msg="Failed to destroy network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771380 containerd[2726]: time="2025-05-08T00:29:14.771353341Z" level=error msg="Failed to destroy network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771648 containerd[2726]: time="2025-05-08T00:29:14.771626637Z" level=error msg="encountered an error cleaning up failed sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771692 containerd[2726]: time="2025-05-08T00:29:14.771676880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771730 containerd[2726]: time="2025-05-08T00:29:14.771707562Z" level=error msg="encountered an error cleaning up failed sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771770 containerd[2726]: time="2025-05-08T00:29:14.771753125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771819 kubelet[4251]: E0508 00:29:14.771798 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771853 kubelet[4251]: E0508 00:29:14.771835 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" May 8 00:29:14.771883 kubelet[4251]: E0508 00:29:14.771856 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" May 8 00:29:14.771908 kubelet[4251]: E0508 00:29:14.771878 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.771938 kubelet[4251]: E0508 00:29:14.771925 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" May 8 00:29:14.771961 kubelet[4251]: E0508 00:29:14.771942 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" May 8 00:29:14.771990 kubelet[4251]: E0508 00:29:14.771972 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bd9f6d6cd-fck25_calico-system(d3782ab7-8e17-40bb-8afe-070d834a6209)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bd9f6d6cd-fck25_calico-system(d3782ab7-8e17-40bb-8afe-070d834a6209)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" podUID="d3782ab7-8e17-40bb-8afe-070d834a6209" May 8 00:29:14.772027 kubelet[4251]: E0508 00:29:14.771887 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd564c9db-fvbbx_calico-apiserver(8d49af69-296d-4b24-a562-fc4412202a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd564c9db-fvbbx_calico-apiserver(8d49af69-296d-4b24-a562-fc4412202a83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" podUID="8d49af69-296d-4b24-a562-fc4412202a83" May 8 00:29:14.772154 containerd[2726]: time="2025-05-08T00:29:14.772128307Z" level=error msg="Failed to destroy network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.772445 containerd[2726]: time="2025-05-08T00:29:14.772425764Z" level=error msg="encountered an error cleaning up failed sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.772484 containerd[2726]: time="2025-05-08T00:29:14.772469007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.772591 kubelet[4251]: E0508 00:29:14.772569 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.772621 kubelet[4251]: E0508 00:29:14.772607 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j49s4" May 8 00:29:14.772643 kubelet[4251]: E0508 00:29:14.772624 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j49s4" May 8 00:29:14.772671 kubelet[4251]: E0508 00:29:14.772654 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j49s4_kube-system(ae4b5035-3b5c-43b7-9e5c-b67545c7be07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j49s4_kube-system(ae4b5035-3b5c-43b7-9e5c-b67545c7be07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j49s4" podUID="ae4b5035-3b5c-43b7-9e5c-b67545c7be07" May 8 00:29:14.773229 containerd[2726]: 
time="2025-05-08T00:29:14.773203930Z" level=error msg="Failed to destroy network for sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.773506 containerd[2726]: time="2025-05-08T00:29:14.773486027Z" level=error msg="encountered an error cleaning up failed sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.773544 containerd[2726]: time="2025-05-08T00:29:14.773529069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.773682 kubelet[4251]: E0508 00:29:14.773656 4251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:29:14.773707 kubelet[4251]: E0508 00:29:14.773696 4251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qllhv" May 8 00:29:14.773731 kubelet[4251]: E0508 00:29:14.773713 4251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qllhv" May 8 00:29:14.773762 kubelet[4251]: E0508 00:29:14.773743 4251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qllhv_calico-system(e48e3e91-8128-422e-a541-9325080112a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qllhv_calico-system(e48e3e91-8128-422e-a541-9325080112a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qllhv" podUID="e48e3e91-8128-422e-a541-9325080112a1" May 8 00:29:14.814320 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:29:14.814379 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:29:15.153290 systemd[1]: run-netns-cni\x2d80319a6c\x2d1e81\x2d365c\x2d4367\x2dcdd197e2a8ff.mount: Deactivated successfully. 
May 8 00:29:15.153371 systemd[1]: run-netns-cni\x2d2aeef675\x2dfc36\x2d18a6\x2d37e5\x2dacadb8d4f556.mount: Deactivated successfully. May 8 00:29:15.153418 systemd[1]: run-netns-cni\x2db4d4e1f4\x2d9910\x2dad11\x2daf57\x2d5bd6e8658851.mount: Deactivated successfully. May 8 00:29:15.153462 systemd[1]: run-netns-cni\x2d252bd965\x2d4bf6\x2ddd15\x2defbd\x2d0de05d149394.mount: Deactivated successfully. May 8 00:29:15.153506 systemd[1]: run-netns-cni\x2d5af176ca\x2d8082\x2d5e3e\x2d2d12\x2d136d960ed22e.mount: Deactivated successfully. May 8 00:29:15.153550 systemd[1]: run-netns-cni\x2dc3cf4fa1\x2d7880\x2d5a67\x2dc9ce\x2dbb0835b62175.mount: Deactivated successfully. May 8 00:29:15.728750 kubelet[4251]: I0508 00:29:15.728722 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d" May 8 00:29:15.729161 containerd[2726]: time="2025-05-08T00:29:15.729132391Z" level=info msg="StopPodSandbox for \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\"" May 8 00:29:15.729349 containerd[2726]: time="2025-05-08T00:29:15.729300040Z" level=info msg="Ensure that sandbox 2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d in task-service has been cleanup successfully" May 8 00:29:15.729485 containerd[2726]: time="2025-05-08T00:29:15.729469890Z" level=info msg="TearDown network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\" successfully" May 8 00:29:15.729510 containerd[2726]: time="2025-05-08T00:29:15.729483411Z" level=info msg="StopPodSandbox for \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\" returns successfully" May 8 00:29:15.730082 containerd[2726]: time="2025-05-08T00:29:15.730068483Z" level=info msg="StopPodSandbox for \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\"" May 8 00:29:15.730149 containerd[2726]: time="2025-05-08T00:29:15.730138767Z" level=info msg="TearDown network for 
sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" successfully" May 8 00:29:15.730169 containerd[2726]: time="2025-05-08T00:29:15.730149208Z" level=info msg="StopPodSandbox for \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" returns successfully" May 8 00:29:15.730350 containerd[2726]: time="2025-05-08T00:29:15.730330618Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\"" May 8 00:29:15.730419 containerd[2726]: time="2025-05-08T00:29:15.730408262Z" level=info msg="TearDown network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" successfully" May 8 00:29:15.730446 containerd[2726]: time="2025-05-08T00:29:15.730419263Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" returns successfully" May 8 00:29:15.730516 kubelet[4251]: I0508 00:29:15.730502 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6" May 8 00:29:15.730768 containerd[2726]: time="2025-05-08T00:29:15.730749801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:3,}" May 8 00:29:15.730850 containerd[2726]: time="2025-05-08T00:29:15.730831766Z" level=info msg="StopPodSandbox for \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\"" May 8 00:29:15.730960 systemd[1]: run-netns-cni\x2d14a44934\x2dd6ce\x2d1a05\x2d6eb9\x2d469890fe55f8.mount: Deactivated successfully. 
May 8 00:29:15.731114 containerd[2726]: time="2025-05-08T00:29:15.731096261Z" level=info msg="Ensure that sandbox 34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6 in task-service has been cleanup successfully" May 8 00:29:15.731290 containerd[2726]: time="2025-05-08T00:29:15.731274751Z" level=info msg="TearDown network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\" successfully" May 8 00:29:15.731315 containerd[2726]: time="2025-05-08T00:29:15.731289911Z" level=info msg="StopPodSandbox for \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\" returns successfully" May 8 00:29:15.731535 containerd[2726]: time="2025-05-08T00:29:15.731514484Z" level=info msg="StopPodSandbox for \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\"" May 8 00:29:15.731630 containerd[2726]: time="2025-05-08T00:29:15.731617170Z" level=info msg="TearDown network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" successfully" May 8 00:29:15.731652 containerd[2726]: time="2025-05-08T00:29:15.731630010Z" level=info msg="StopPodSandbox for \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" returns successfully" May 8 00:29:15.731728 kubelet[4251]: I0508 00:29:15.731715 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13" May 8 00:29:15.731822 containerd[2726]: time="2025-05-08T00:29:15.731806740Z" level=info msg="StopPodSandbox for \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\"" May 8 00:29:15.731882 containerd[2726]: time="2025-05-08T00:29:15.731871584Z" level=info msg="TearDown network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" successfully" May 8 00:29:15.731902 containerd[2726]: time="2025-05-08T00:29:15.731881704Z" level=info msg="StopPodSandbox for 
\"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" returns successfully" May 8 00:29:15.732054 containerd[2726]: time="2025-05-08T00:29:15.732037953Z" level=info msg="StopPodSandbox for \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\"" May 8 00:29:15.732173 containerd[2726]: time="2025-05-08T00:29:15.732161600Z" level=info msg="Ensure that sandbox 21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13 in task-service has been cleanup successfully" May 8 00:29:15.732285 containerd[2726]: time="2025-05-08T00:29:15.732269126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:3,}" May 8 00:29:15.732475 containerd[2726]: time="2025-05-08T00:29:15.732463017Z" level=info msg="TearDown network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\" successfully" May 8 00:29:15.732495 containerd[2726]: time="2025-05-08T00:29:15.732475098Z" level=info msg="StopPodSandbox for \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\" returns successfully" May 8 00:29:15.732768 containerd[2726]: time="2025-05-08T00:29:15.732636947Z" level=info msg="StopPodSandbox for \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\"" May 8 00:29:15.732768 containerd[2726]: time="2025-05-08T00:29:15.732704830Z" level=info msg="TearDown network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" successfully" May 8 00:29:15.732768 containerd[2726]: time="2025-05-08T00:29:15.732713231Z" level=info msg="StopPodSandbox for \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" returns successfully" May 8 00:29:15.732893 systemd[1]: run-netns-cni\x2df7430787\x2d41f7\x2dad01\x2d314a\x2d47cb9268a805.mount: Deactivated successfully. 
May 8 00:29:15.732951 containerd[2726]: time="2025-05-08T00:29:15.732890201Z" level=info msg="StopPodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\"" May 8 00:29:15.732975 containerd[2726]: time="2025-05-08T00:29:15.732960845Z" level=info msg="TearDown network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" successfully" May 8 00:29:15.732975 containerd[2726]: time="2025-05-08T00:29:15.732970365Z" level=info msg="StopPodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" returns successfully" May 8 00:29:15.733088 kubelet[4251]: I0508 00:29:15.733073 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3" May 8 00:29:15.733294 containerd[2726]: time="2025-05-08T00:29:15.733276182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:3,}" May 8 00:29:15.733511 containerd[2726]: time="2025-05-08T00:29:15.733494474Z" level=info msg="StopPodSandbox for \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\"" May 8 00:29:15.733636 containerd[2726]: time="2025-05-08T00:29:15.733625042Z" level=info msg="Ensure that sandbox 5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3 in task-service has been cleanup successfully" May 8 00:29:15.733800 containerd[2726]: time="2025-05-08T00:29:15.733788091Z" level=info msg="TearDown network for sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\" successfully" May 8 00:29:15.733824 containerd[2726]: time="2025-05-08T00:29:15.733800211Z" level=info msg="StopPodSandbox for \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\" returns successfully" May 8 00:29:15.734029 containerd[2726]: time="2025-05-08T00:29:15.734011503Z" level=info msg="StopPodSandbox for 
\"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\"" May 8 00:29:15.734102 containerd[2726]: time="2025-05-08T00:29:15.734091348Z" level=info msg="TearDown network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" successfully" May 8 00:29:15.734126 containerd[2726]: time="2025-05-08T00:29:15.734102548Z" level=info msg="StopPodSandbox for \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" returns successfully" May 8 00:29:15.734314 containerd[2726]: time="2025-05-08T00:29:15.734300519Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\"" May 8 00:29:15.734376 containerd[2726]: time="2025-05-08T00:29:15.734366403Z" level=info msg="TearDown network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" successfully" May 8 00:29:15.734396 containerd[2726]: time="2025-05-08T00:29:15.734376524Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" returns successfully" May 8 00:29:15.734717 containerd[2726]: time="2025-05-08T00:29:15.734695221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:3,}" May 8 00:29:15.734777 systemd[1]: run-netns-cni\x2d61d6d371\x2d3633\x2db0d9\x2d254d\x2d6ec7ebed9926.mount: Deactivated successfully. May 8 00:29:15.734852 systemd[1]: run-netns-cni\x2d90ee592e\x2de9ff\x2d17cc\x2d6a0c\x2d2d5ed628584b.mount: Deactivated successfully. 
May 8 00:29:15.735917 kubelet[4251]: I0508 00:29:15.735902 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725" May 8 00:29:15.736279 containerd[2726]: time="2025-05-08T00:29:15.736260469Z" level=info msg="StopPodSandbox for \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\"" May 8 00:29:15.736398 containerd[2726]: time="2025-05-08T00:29:15.736386516Z" level=info msg="Ensure that sandbox 796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725 in task-service has been cleanup successfully" May 8 00:29:15.736567 containerd[2726]: time="2025-05-08T00:29:15.736545845Z" level=info msg="TearDown network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\" successfully" May 8 00:29:15.736567 containerd[2726]: time="2025-05-08T00:29:15.736558605Z" level=info msg="StopPodSandbox for \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\" returns successfully" May 8 00:29:15.736845 containerd[2726]: time="2025-05-08T00:29:15.736826060Z" level=info msg="StopPodSandbox for \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\"" May 8 00:29:15.736923 containerd[2726]: time="2025-05-08T00:29:15.736910825Z" level=info msg="TearDown network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" successfully" May 8 00:29:15.736945 containerd[2726]: time="2025-05-08T00:29:15.736924026Z" level=info msg="StopPodSandbox for \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" returns successfully" May 8 00:29:15.737169 containerd[2726]: time="2025-05-08T00:29:15.737151198Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\"" May 8 00:29:15.737242 containerd[2726]: time="2025-05-08T00:29:15.737229963Z" level=info msg="TearDown network for sandbox 
\"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" successfully" May 8 00:29:15.737264 containerd[2726]: time="2025-05-08T00:29:15.737242363Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" returns successfully" May 8 00:29:15.737372 kubelet[4251]: I0508 00:29:15.737358 4251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6" May 8 00:29:15.737538 containerd[2726]: time="2025-05-08T00:29:15.737519899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:3,}" May 8 00:29:15.737739 containerd[2726]: time="2025-05-08T00:29:15.737720110Z" level=info msg="StopPodSandbox for \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\"" May 8 00:29:15.737889 containerd[2726]: time="2025-05-08T00:29:15.737876199Z" level=info msg="Ensure that sandbox 812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6 in task-service has been cleanup successfully" May 8 00:29:15.738038 containerd[2726]: time="2025-05-08T00:29:15.738024287Z" level=info msg="TearDown network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\" successfully" May 8 00:29:15.738065 containerd[2726]: time="2025-05-08T00:29:15.738038008Z" level=info msg="StopPodSandbox for \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\" returns successfully" May 8 00:29:15.738253 containerd[2726]: time="2025-05-08T00:29:15.738237659Z" level=info msg="StopPodSandbox for \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\"" May 8 00:29:15.738319 containerd[2726]: time="2025-05-08T00:29:15.738308343Z" level=info msg="TearDown network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" successfully" May 8 00:29:15.738340 
containerd[2726]: time="2025-05-08T00:29:15.738320024Z" level=info msg="StopPodSandbox for \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" returns successfully" May 8 00:29:15.739037 containerd[2726]: time="2025-05-08T00:29:15.739017262Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\"" May 8 00:29:15.739116 containerd[2726]: time="2025-05-08T00:29:15.739105987Z" level=info msg="TearDown network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" successfully" May 8 00:29:15.739138 containerd[2726]: time="2025-05-08T00:29:15.739116428Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" returns successfully" May 8 00:29:15.739479 containerd[2726]: time="2025-05-08T00:29:15.739460447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:3,}" May 8 00:29:15.745552 kubelet[4251]: I0508 00:29:15.745501 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ncdtd" podStartSLOduration=1.49702226 podStartE2EDuration="6.745482143s" podCreationTimestamp="2025-05-08 00:29:09 +0000 UTC" firstStartedPulling="2025-05-08 00:29:09.393535585 +0000 UTC m=+15.801304137" lastFinishedPulling="2025-05-08 00:29:14.641995428 +0000 UTC m=+21.049764020" observedRunningTime="2025-05-08 00:29:15.745096121 +0000 UTC m=+22.152864713" watchObservedRunningTime="2025-05-08 00:29:15.745482143 +0000 UTC m=+22.153250735" May 8 00:29:15.842005 systemd-networkd[2631]: cali6e6ce15483c: Link UP May 8 00:29:15.842157 systemd-networkd[2631]: cali6e6ce15483c: Gained carrier May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.755 [INFO][6630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.773 
[INFO][6630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0 calico-apiserver-5bd564c9db- calico-apiserver 2f17d894-ad9b-4601-911e-d2ddf3fb666b 647 0 2025-05-08 00:29:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bd564c9db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-e3459bc746 calico-apiserver-5bd564c9db-47dls eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e6ce15483c [] []}} ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6630] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.807 [INFO][6804] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" HandleID="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6804] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" 
HandleID="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400041e3f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-e3459bc746", "pod":"calico-apiserver-5bd564c9db-47dls", "timestamp":"2025-05-08 00:29:15.807985349 +0000 UTC"}, Hostname:"ci-4230.1.1-n-e3459bc746", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6804] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-e3459bc746' May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.819 [INFO][6804] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.822 [INFO][6804] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.825 [INFO][6804] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.826 [INFO][6804] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.828 [INFO][6804] ipam/ipam.go 232: Affinity is 
confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.828 [INFO][6804] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.829 [INFO][6804] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.831 [INFO][6804] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.835 [INFO][6804] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.193/26] block=192.168.18.192/26 handle="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.835 [INFO][6804] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.193/26] handle="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.835 [INFO][6804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:29:15.848895 containerd[2726]: 2025-05-08 00:29:15.835 [INFO][6804] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.193/26] IPv6=[] ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" HandleID="k8s-pod-network.ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" May 8 00:29:15.849340 containerd[2726]: 2025-05-08 00:29:15.836 [INFO][6630] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0", GenerateName:"calico-apiserver-5bd564c9db-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f17d894-ad9b-4601-911e-d2ddf3fb666b", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd564c9db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"", Pod:"calico-apiserver-5bd564c9db-47dls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e6ce15483c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:15.849340 containerd[2726]: 2025-05-08 00:29:15.837 [INFO][6630] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.193/32] ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" May 8 00:29:15.849340 containerd[2726]: 2025-05-08 00:29:15.837 [INFO][6630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e6ce15483c ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" May 8 00:29:15.849340 containerd[2726]: 2025-05-08 00:29:15.842 [INFO][6630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" May 8 00:29:15.849340 containerd[2726]: 2025-05-08 00:29:15.842 [INFO][6630] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0", GenerateName:"calico-apiserver-5bd564c9db-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f17d894-ad9b-4601-911e-d2ddf3fb666b", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd564c9db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab", Pod:"calico-apiserver-5bd564c9db-47dls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e6ce15483c", MAC:"da:0d:0b:c4:11:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:15.849340 containerd[2726]: 2025-05-08 00:29:15.847 [INFO][6630] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-47dls" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--47dls-eth0" May 8 00:29:15.862563 containerd[2726]: time="2025-05-08T00:29:15.862508150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:15.862590 containerd[2726]: time="2025-05-08T00:29:15.862555353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:15.862912 containerd[2726]: time="2025-05-08T00:29:15.862885371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:15.862998 containerd[2726]: time="2025-05-08T00:29:15.862982736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:15.881165 systemd[1]: Started cri-containerd-ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab.scope - libcontainer container ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab. May 8 00:29:15.904045 containerd[2726]: time="2025-05-08T00:29:15.903977623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-47dls,Uid:2f17d894-ad9b-4601-911e-d2ddf3fb666b,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab\"" May 8 00:29:15.905036 containerd[2726]: time="2025-05-08T00:29:15.905019761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:29:15.947413 systemd-networkd[2631]: cali46cf1dad90c: Link UP May 8 00:29:15.947536 systemd-networkd[2631]: cali46cf1dad90c: Gained carrier May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.755 [INFO][6622] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6622] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0 coredns-668d6bf9bc- kube-system 8583ab32-415e-4ed6-826b-5de624d0cf2d 641 0 
2025-05-08 00:29:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-e3459bc746 coredns-668d6bf9bc-524hx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46cf1dad90c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6622] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.807 [INFO][6800] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" HandleID="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Workload="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6800] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" HandleID="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Workload="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000781490), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-e3459bc746", "pod":"coredns-668d6bf9bc-524hx", "timestamp":"2025-05-08 00:29:15.807984309 +0000 UTC"}, 
Hostname:"ci-4230.1.1-n-e3459bc746", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.835 [INFO][6800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.835 [INFO][6800] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-e3459bc746' May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.919 [INFO][6800] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.922 [INFO][6800] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.926 [INFO][6800] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.928 [INFO][6800] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.933 [INFO][6800] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.933 [INFO][6800] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.936 [INFO][6800] 
ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928 May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.940 [INFO][6800] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.945 [INFO][6800] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.194/26] block=192.168.18.192/26 handle="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.945 [INFO][6800] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.194/26] handle="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.945 [INFO][6800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:29:15.954163 containerd[2726]: 2025-05-08 00:29:15.945 [INFO][6800] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.194/26] IPv6=[] ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" HandleID="k8s-pod-network.8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Workload="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" May 8 00:29:15.954623 containerd[2726]: 2025-05-08 00:29:15.946 [INFO][6622] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8583ab32-415e-4ed6-826b-5de624d0cf2d", ResourceVersion:"641", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"", Pod:"coredns-668d6bf9bc-524hx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46cf1dad90c", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:15.954623 containerd[2726]: 2025-05-08 00:29:15.946 [INFO][6622] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.194/32] ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" May 8 00:29:15.954623 containerd[2726]: 2025-05-08 00:29:15.946 [INFO][6622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46cf1dad90c ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" May 8 00:29:15.954623 containerd[2726]: 2025-05-08 00:29:15.947 [INFO][6622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" May 8 00:29:15.954623 containerd[2726]: 2025-05-08 00:29:15.947 [INFO][6622] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8583ab32-415e-4ed6-826b-5de624d0cf2d", ResourceVersion:"641", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928", Pod:"coredns-668d6bf9bc-524hx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46cf1dad90c", MAC:"e6:76:e9:21:cc:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:15.954623 containerd[2726]: 2025-05-08 00:29:15.953 [INFO][6622] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928" Namespace="kube-system" Pod="coredns-668d6bf9bc-524hx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--524hx-eth0" May 8 00:29:15.967942 containerd[2726]: time="2025-05-08T00:29:15.967885547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:15.967942 containerd[2726]: time="2025-05-08T00:29:15.967931270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:15.967991 containerd[2726]: time="2025-05-08T00:29:15.967945150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:15.968028 containerd[2726]: time="2025-05-08T00:29:15.968013594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:15.998239 systemd[1]: Started cri-containerd-8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928.scope - libcontainer container 8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928. 
May 8 00:29:16.021670 containerd[2726]: time="2025-05-08T00:29:16.021644489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-524hx,Uid:8583ab32-415e-4ed6-826b-5de624d0cf2d,Namespace:kube-system,Attempt:3,} returns sandbox id \"8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928\"" May 8 00:29:16.023633 containerd[2726]: time="2025-05-08T00:29:16.023609993Z" level=info msg="CreateContainer within sandbox \"8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:29:16.028896 containerd[2726]: time="2025-05-08T00:29:16.028868232Z" level=info msg="CreateContainer within sandbox \"8867f2616952931acf8b79c7efb08a02a697a969427ebd100ab95ea2ec7b5928\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41188ed81f97f5bfa69986c7e54d3502cd7583d2afa8785a2ae6502770a9ede1\"" May 8 00:29:16.029221 containerd[2726]: time="2025-05-08T00:29:16.029204450Z" level=info msg="StartContainer for \"41188ed81f97f5bfa69986c7e54d3502cd7583d2afa8785a2ae6502770a9ede1\"" May 8 00:29:16.054248 systemd-networkd[2631]: cali184ca88ce3f: Link UP May 8 00:29:16.054377 systemd-networkd[2631]: cali184ca88ce3f: Gained carrier May 8 00:29:16.059197 systemd[1]: Started cri-containerd-41188ed81f97f5bfa69986c7e54d3502cd7583d2afa8785a2ae6502770a9ede1.scope - libcontainer container 41188ed81f97f5bfa69986c7e54d3502cd7583d2afa8785a2ae6502770a9ede1. 
May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:15.759 [INFO][6673] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0 calico-apiserver-5bd564c9db- calico-apiserver 8d49af69-296d-4b24-a562-fc4412202a83 645 0 2025-05-08 00:29:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bd564c9db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-e3459bc746 calico-apiserver-5bd564c9db-fvbbx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali184ca88ce3f [] []}} ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6673] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:15.807 [INFO][6797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" HandleID="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" May 8 00:29:16.062279 
containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" HandleID="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000412890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-e3459bc746", "pod":"calico-apiserver-5bd564c9db-fvbbx", "timestamp":"2025-05-08 00:29:15.807983589 +0000 UTC"}, Hostname:"ci-4230.1.1-n-e3459bc746", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:15.945 [INFO][6797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:15.945 [INFO][6797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-e3459bc746' May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.020 [INFO][6797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.023 [INFO][6797] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.040 [INFO][6797] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.041 [INFO][6797] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.043 [INFO][6797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.043 [INFO][6797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.044 [INFO][6797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677 May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.047 [INFO][6797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.051 [INFO][6797] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.18.195/26] block=192.168.18.192/26 handle="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.051 [INFO][6797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.195/26] handle="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.051 [INFO][6797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:29:16.062279 containerd[2726]: 2025-05-08 00:29:16.051 [INFO][6797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.195/26] IPv6=[] ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" HandleID="k8s-pod-network.e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" May 8 00:29:16.062807 containerd[2726]: 2025-05-08 00:29:16.052 [INFO][6673] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0", GenerateName:"calico-apiserver-5bd564c9db-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d49af69-296d-4b24-a562-fc4412202a83", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5bd564c9db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"", Pod:"calico-apiserver-5bd564c9db-fvbbx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali184ca88ce3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.062807 containerd[2726]: 2025-05-08 00:29:16.053 [INFO][6673] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.195/32] ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" May 8 00:29:16.062807 containerd[2726]: 2025-05-08 00:29:16.053 [INFO][6673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali184ca88ce3f ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" May 8 00:29:16.062807 containerd[2726]: 2025-05-08 00:29:16.054 [INFO][6673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" 
WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" May 8 00:29:16.062807 containerd[2726]: 2025-05-08 00:29:16.054 [INFO][6673] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0", GenerateName:"calico-apiserver-5bd564c9db-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d49af69-296d-4b24-a562-fc4412202a83", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd564c9db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677", Pod:"calico-apiserver-5bd564c9db-fvbbx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali184ca88ce3f", MAC:"e2:0b:8f:d0:b0:e0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.062807 containerd[2726]: 2025-05-08 00:29:16.060 [INFO][6673] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677" Namespace="calico-apiserver" Pod="calico-apiserver-5bd564c9db-fvbbx" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--apiserver--5bd564c9db--fvbbx-eth0" May 8 00:29:16.076876 containerd[2726]: time="2025-05-08T00:29:16.076848536Z" level=info msg="StartContainer for \"41188ed81f97f5bfa69986c7e54d3502cd7583d2afa8785a2ae6502770a9ede1\" returns successfully" May 8 00:29:16.077605 containerd[2726]: time="2025-05-08T00:29:16.077549134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:16.077646 containerd[2726]: time="2025-05-08T00:29:16.077601976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:16.077646 containerd[2726]: time="2025-05-08T00:29:16.077614137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.077718 containerd[2726]: time="2025-05-08T00:29:16.077685901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.104191 systemd[1]: Started cri-containerd-e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677.scope - libcontainer container e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677. 
May 8 00:29:16.128016 containerd[2726]: time="2025-05-08T00:29:16.127987728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd564c9db-fvbbx,Uid:8d49af69-296d-4b24-a562-fc4412202a83,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677\"" May 8 00:29:16.155016 systemd-networkd[2631]: cali433cbc3cf31: Link UP May 8 00:29:16.155175 systemd-networkd[2631]: cali433cbc3cf31: Gained carrier May 8 00:29:16.158360 systemd[1]: run-netns-cni\x2d352343a5\x2dc5f2\x2d783c\x2d537b\x2dc4596c2a5705.mount: Deactivated successfully. May 8 00:29:16.158440 systemd[1]: run-netns-cni\x2d2a39ec27\x2db55a\x2d8f8d\x2d443f\x2d487860619d7f.mount: Deactivated successfully. May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:15.756 [INFO][6649] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0 coredns-668d6bf9bc- kube-system ae4b5035-3b5c-43b7-9e5c-b67545c7be07 643 0 2025-05-08 00:29:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-e3459bc746 coredns-668d6bf9bc-j49s4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali433cbc3cf31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6649] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:15.807 [INFO][6798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" HandleID="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Workload="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" HandleID="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Workload="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-e3459bc746", "pod":"coredns-668d6bf9bc-j49s4", "timestamp":"2025-05-08 00:29:15.807983109 +0000 UTC"}, Hostname:"ci-4230.1.1-n-e3459bc746", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.051 [INFO][6798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.051 [INFO][6798] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-e3459bc746' May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.120 [INFO][6798] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.123 [INFO][6798] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.141 [INFO][6798] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.142 [INFO][6798] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.144 [INFO][6798] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.144 [INFO][6798] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.145 [INFO][6798] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9 May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.148 [INFO][6798] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.152 [INFO][6798] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.18.196/26] block=192.168.18.192/26 handle="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.152 [INFO][6798] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.196/26] handle="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.152 [INFO][6798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:29:16.173957 containerd[2726]: 2025-05-08 00:29:16.152 [INFO][6798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.196/26] IPv6=[] ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" HandleID="k8s-pod-network.f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Workload="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" May 8 00:29:16.174431 containerd[2726]: 2025-05-08 00:29:16.153 [INFO][6649] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ae4b5035-3b5c-43b7-9e5c-b67545c7be07", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"", Pod:"coredns-668d6bf9bc-j49s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali433cbc3cf31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.174431 containerd[2726]: 2025-05-08 00:29:16.153 [INFO][6649] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.196/32] ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" May 8 00:29:16.174431 containerd[2726]: 2025-05-08 00:29:16.154 [INFO][6649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali433cbc3cf31 ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" May 8 00:29:16.174431 containerd[2726]: 2025-05-08 00:29:16.155 [INFO][6649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" May 8 00:29:16.174431 containerd[2726]: 2025-05-08 00:29:16.155 [INFO][6649] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ae4b5035-3b5c-43b7-9e5c-b67545c7be07", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9", Pod:"coredns-668d6bf9bc-j49s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali433cbc3cf31", MAC:"4a:5e:54:36:dd:f9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.174431 containerd[2726]: 2025-05-08 00:29:16.172 [INFO][6649] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j49s4" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-coredns--668d6bf9bc--j49s4-eth0" May 8 00:29:16.192012 containerd[2726]: time="2025-05-08T00:29:16.191406371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:16.192012 containerd[2726]: time="2025-05-08T00:29:16.191456174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:16.192012 containerd[2726]: time="2025-05-08T00:29:16.191471255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.192012 containerd[2726]: time="2025-05-08T00:29:16.191541179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.217267 systemd[1]: Started cri-containerd-f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9.scope - libcontainer container f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9. 
May 8 00:29:16.240749 containerd[2726]: time="2025-05-08T00:29:16.240720427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j49s4,Uid:ae4b5035-3b5c-43b7-9e5c-b67545c7be07,Namespace:kube-system,Attempt:3,} returns sandbox id \"f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9\"" May 8 00:29:16.242638 containerd[2726]: time="2025-05-08T00:29:16.242614447Z" level=info msg="CreateContainer within sandbox \"f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:29:16.254569 systemd-networkd[2631]: cali96449c62485: Link UP May 8 00:29:16.254696 systemd-networkd[2631]: cali96449c62485: Gained carrier May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:15.760 [INFO][6692] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0 calico-kube-controllers-5bd9f6d6cd- calico-system d3782ab7-8e17-40bb-8afe-070d834a6209 644 0 2025-05-08 00:29:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bd9f6d6cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230.1.1-n-e3459bc746 calico-kube-controllers-5bd9f6d6cd-fck25 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali96449c62485 [] []}} ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-" May 8 00:29:16.261233 
containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6692] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:15.807 [INFO][6799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" HandleID="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" HandleID="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000988a70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-e3459bc746", "pod":"calico-kube-controllers-5bd9f6d6cd-fck25", "timestamp":"2025-05-08 00:29:15.807986589 +0000 UTC"}, Hostname:"ci-4230.1.1-n-e3459bc746", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.152 [INFO][6799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.152 [INFO][6799] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-e3459bc746' May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.220 [INFO][6799] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.224 [INFO][6799] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.241 [INFO][6799] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.243 [INFO][6799] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.245 [INFO][6799] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.245 [INFO][6799] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.246 [INFO][6799] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613 May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.248 [INFO][6799] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.252 [INFO][6799] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.18.197/26] block=192.168.18.192/26 handle="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.252 [INFO][6799] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.197/26] handle="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.252 [INFO][6799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:29:16.261233 containerd[2726]: 2025-05-08 00:29:16.252 [INFO][6799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.197/26] IPv6=[] ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" HandleID="k8s-pod-network.bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Workload="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" May 8 00:29:16.261698 containerd[2726]: 2025-05-08 00:29:16.253 [INFO][6692] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0", GenerateName:"calico-kube-controllers-5bd9f6d6cd-", Namespace:"calico-system", SelfLink:"", UID:"d3782ab7-8e17-40bb-8afe-070d834a6209", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd9f6d6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"", Pod:"calico-kube-controllers-5bd9f6d6cd-fck25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96449c62485", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.261698 containerd[2726]: 2025-05-08 00:29:16.253 [INFO][6692] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.197/32] ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" May 8 00:29:16.261698 containerd[2726]: 2025-05-08 00:29:16.253 [INFO][6692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96449c62485 ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" May 8 00:29:16.261698 containerd[2726]: 2025-05-08 00:29:16.254 [INFO][6692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" 
WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" May 8 00:29:16.261698 containerd[2726]: 2025-05-08 00:29:16.254 [INFO][6692] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0", GenerateName:"calico-kube-controllers-5bd9f6d6cd-", Namespace:"calico-system", SelfLink:"", UID:"d3782ab7-8e17-40bb-8afe-070d834a6209", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd9f6d6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613", Pod:"calico-kube-controllers-5bd9f6d6cd-fck25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali96449c62485", MAC:"d2:ce:da:2b:24:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.261698 containerd[2726]: 2025-05-08 00:29:16.260 [INFO][6692] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613" Namespace="calico-system" Pod="calico-kube-controllers-5bd9f6d6cd-fck25" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-calico--kube--controllers--5bd9f6d6cd--fck25-eth0" May 8 00:29:16.274386 containerd[2726]: time="2025-05-08T00:29:16.274354210Z" level=info msg="CreateContainer within sandbox \"f63b3ac3372f81101e549a9ff0df7d978225becb6a41389ce256a761843394d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19e07bd2e56fe1fa758fb068e71846fdfa47ce9f3d6a23f3befa411fc0966b40\"" May 8 00:29:16.274798 containerd[2726]: time="2025-05-08T00:29:16.274781113Z" level=info msg="StartContainer for \"19e07bd2e56fe1fa758fb068e71846fdfa47ce9f3d6a23f3befa411fc0966b40\"" May 8 00:29:16.280918 containerd[2726]: time="2025-05-08T00:29:16.280858995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:16.280948 containerd[2726]: time="2025-05-08T00:29:16.280921079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:16.280968 containerd[2726]: time="2025-05-08T00:29:16.280933279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.281029 containerd[2726]: time="2025-05-08T00:29:16.281012243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.306227 systemd[1]: Started cri-containerd-19e07bd2e56fe1fa758fb068e71846fdfa47ce9f3d6a23f3befa411fc0966b40.scope - libcontainer container 19e07bd2e56fe1fa758fb068e71846fdfa47ce9f3d6a23f3befa411fc0966b40. May 8 00:29:16.307641 systemd[1]: Started cri-containerd-bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613.scope - libcontainer container bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613. May 8 00:29:16.307921 kubelet[4251]: I0508 00:29:16.307897 4251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:29:16.324503 containerd[2726]: time="2025-05-08T00:29:16.324468548Z" level=info msg="StartContainer for \"19e07bd2e56fe1fa758fb068e71846fdfa47ce9f3d6a23f3befa411fc0966b40\" returns successfully" May 8 00:29:16.332127 containerd[2726]: time="2025-05-08T00:29:16.332096832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd9f6d6cd-fck25,Uid:d3782ab7-8e17-40bb-8afe-070d834a6209,Namespace:calico-system,Attempt:3,} returns sandbox id \"bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613\"" May 8 00:29:16.360765 systemd-networkd[2631]: cali0b6f108b823: Link UP May 8 00:29:16.360938 systemd-networkd[2631]: cali0b6f108b823: Gained carrier May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:15.757 [INFO][6650] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0 csi-node-driver- calico-system e48e3e91-8128-422e-a541-9325080112a1 585 0 2025-05-08 00:29:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230.1.1-n-e3459bc746 csi-node-driver-qllhv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b6f108b823 [] []}} ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:15.773 [INFO][6650] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:15.807 [INFO][6802] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" HandleID="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Workload="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6802] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" HandleID="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Workload="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000388a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-e3459bc746", "pod":"csi-node-driver-qllhv", "timestamp":"2025-05-08 00:29:15.807983789 +0000 UTC"}, Hostname:"ci-4230.1.1-n-e3459bc746", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:15.817 [INFO][6802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.252 [INFO][6802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.252 [INFO][6802] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-e3459bc746' May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.321 [INFO][6802] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.324 [INFO][6802] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.343 [INFO][6802] ipam/ipam.go 489: Trying affinity for 192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.344 [INFO][6802] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.350 [INFO][6802] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.350 [INFO][6802] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.351 [INFO][6802] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.353 [INFO][6802] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.357 [INFO][6802] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.198/26] block=192.168.18.192/26 handle="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.357 [INFO][6802] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.198/26] handle="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" host="ci-4230.1.1-n-e3459bc746" May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.358 [INFO][6802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:29:16.368346 containerd[2726]: 2025-05-08 00:29:16.358 [INFO][6802] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.198/26] IPv6=[] ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" HandleID="k8s-pod-network.ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Workload="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" May 8 00:29:16.368792 containerd[2726]: 2025-05-08 00:29:16.359 [INFO][6650] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e48e3e91-8128-422e-a541-9325080112a1", ResourceVersion:"585", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"", Pod:"csi-node-driver-qllhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b6f108b823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.368792 containerd[2726]: 2025-05-08 00:29:16.359 [INFO][6650] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.198/32] ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" May 8 00:29:16.368792 containerd[2726]: 2025-05-08 00:29:16.359 [INFO][6650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b6f108b823 ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" May 8 00:29:16.368792 containerd[2726]: 2025-05-08 00:29:16.360 [INFO][6650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" May 8 00:29:16.368792 containerd[2726]: 2025-05-08 00:29:16.361 [INFO][6650] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"e48e3e91-8128-422e-a541-9325080112a1", ResourceVersion:"585", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-e3459bc746", ContainerID:"ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd", Pod:"csi-node-driver-qllhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b6f108b823", MAC:"52:cc:a3:e3:01:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:29:16.368792 containerd[2726]: 2025-05-08 00:29:16.366 [INFO][6650] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd" Namespace="calico-system" Pod="csi-node-driver-qllhv" WorkloadEndpoint="ci--4230.1.1--n--e3459bc746-k8s-csi--node--driver--qllhv-eth0" May 8 00:29:16.382164 containerd[2726]: time="2025-05-08T00:29:16.382096404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:29:16.382220 containerd[2726]: time="2025-05-08T00:29:16.382158407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:29:16.382220 containerd[2726]: time="2025-05-08T00:29:16.382169248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.382278 containerd[2726]: time="2025-05-08T00:29:16.382248292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:29:16.413184 systemd[1]: Started cri-containerd-ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd.scope - libcontainer container ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd. May 8 00:29:16.429770 containerd[2726]: time="2025-05-08T00:29:16.429740090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qllhv,Uid:e48e3e91-8128-422e-a541-9325080112a1,Namespace:calico-system,Attempt:3,} returns sandbox id \"ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd\"" May 8 00:29:16.550674 containerd[2726]: time="2025-05-08T00:29:16.550566498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:16.550757 containerd[2726]: time="2025-05-08T00:29:16.550594059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 8 00:29:16.551308 containerd[2726]: time="2025-05-08T00:29:16.551287856Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:16.553172 containerd[2726]: time="2025-05-08T00:29:16.553149915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:16.553950 
containerd[2726]: time="2025-05-08T00:29:16.553936277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 648.892114ms" May 8 00:29:16.553973 containerd[2726]: time="2025-05-08T00:29:16.553956678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:29:16.554864 containerd[2726]: time="2025-05-08T00:29:16.554844045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:29:16.555761 containerd[2726]: time="2025-05-08T00:29:16.555737892Z" level=info msg="CreateContainer within sandbox \"ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:29:16.560367 containerd[2726]: time="2025-05-08T00:29:16.560340616Z" level=info msg="CreateContainer within sandbox \"ec75092f9c51d5535a8b57f1d61bc0984b1507e55863c46abcc6d2a467b63bab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"412842f154bb3b08ada63fd51ba7d79e6abf3059324a49835a6a3a1882fffaaf\"" May 8 00:29:16.560692 containerd[2726]: time="2025-05-08T00:29:16.560668514Z" level=info msg="StartContainer for \"412842f154bb3b08ada63fd51ba7d79e6abf3059324a49835a6a3a1882fffaaf\"" May 8 00:29:16.589180 systemd[1]: Started cri-containerd-412842f154bb3b08ada63fd51ba7d79e6abf3059324a49835a6a3a1882fffaaf.scope - libcontainer container 412842f154bb3b08ada63fd51ba7d79e6abf3059324a49835a6a3a1882fffaaf. 
May 8 00:29:16.613902 containerd[2726]: time="2025-05-08T00:29:16.613834973Z" level=info msg="StartContainer for \"412842f154bb3b08ada63fd51ba7d79e6abf3059324a49835a6a3a1882fffaaf\" returns successfully" May 8 00:29:16.614462 containerd[2726]: time="2025-05-08T00:29:16.614443125Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:16.614529 containerd[2726]: time="2025-05-08T00:29:16.614493648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:29:16.616930 containerd[2726]: time="2025-05-08T00:29:16.616900256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 62.026529ms" May 8 00:29:16.616957 containerd[2726]: time="2025-05-08T00:29:16.616932417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:29:16.617524 containerd[2726]: time="2025-05-08T00:29:16.617507008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:29:16.618309 containerd[2726]: time="2025-05-08T00:29:16.618286089Z" level=info msg="CreateContainer within sandbox \"e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:29:16.622921 containerd[2726]: time="2025-05-08T00:29:16.622895814Z" level=info msg="CreateContainer within sandbox \"e6800ee605fde593721e4d85a594649e7765c141b00a60175695ae9b9d76e677\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"be30293084ce8120e8b47417517ef2cc214b91734b2fcbe99bdd3903fe648f47\"" May 8 00:29:16.623236 containerd[2726]: time="2025-05-08T00:29:16.623212430Z" level=info msg="StartContainer for \"be30293084ce8120e8b47417517ef2cc214b91734b2fcbe99bdd3903fe648f47\"" May 8 00:29:16.654181 systemd[1]: Started cri-containerd-be30293084ce8120e8b47417517ef2cc214b91734b2fcbe99bdd3903fe648f47.scope - libcontainer container be30293084ce8120e8b47417517ef2cc214b91734b2fcbe99bdd3903fe648f47. May 8 00:29:16.678894 containerd[2726]: time="2025-05-08T00:29:16.678864982Z" level=info msg="StartContainer for \"be30293084ce8120e8b47417517ef2cc214b91734b2fcbe99bdd3903fe648f47\" returns successfully" May 8 00:29:16.752063 kubelet[4251]: I0508 00:29:16.748690 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bd564c9db-47dls" podStartSLOduration=7.098757755 podStartE2EDuration="7.748674004s" podCreationTimestamp="2025-05-08 00:29:09 +0000 UTC" firstStartedPulling="2025-05-08 00:29:15.90482479 +0000 UTC m=+22.312593382" lastFinishedPulling="2025-05-08 00:29:16.554741039 +0000 UTC m=+22.962509631" observedRunningTime="2025-05-08 00:29:16.748557718 +0000 UTC m=+23.156326310" watchObservedRunningTime="2025-05-08 00:29:16.748674004 +0000 UTC m=+23.156442556" May 8 00:29:16.755607 kubelet[4251]: I0508 00:29:16.755567 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-524hx" podStartSLOduration=16.755553129 podStartE2EDuration="16.755553129s" podCreationTimestamp="2025-05-08 00:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:29:16.755297915 +0000 UTC m=+23.163066507" watchObservedRunningTime="2025-05-08 00:29:16.755553129 +0000 UTC m=+23.163321681" May 8 00:29:16.762230 kubelet[4251]: I0508 00:29:16.762193 4251 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-668d6bf9bc-j49s4" podStartSLOduration=16.76217828 podStartE2EDuration="16.76217828s" podCreationTimestamp="2025-05-08 00:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:29:16.762048393 +0000 UTC m=+23.169816985" watchObservedRunningTime="2025-05-08 00:29:16.76217828 +0000 UTC m=+23.169946872" May 8 00:29:16.769046 kubelet[4251]: I0508 00:29:16.768993 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bd564c9db-fvbbx" podStartSLOduration=7.280524858 podStartE2EDuration="7.768981721s" podCreationTimestamp="2025-05-08 00:29:09 +0000 UTC" firstStartedPulling="2025-05-08 00:29:16.128944499 +0000 UTC m=+22.536713051" lastFinishedPulling="2025-05-08 00:29:16.617401322 +0000 UTC m=+23.025169914" observedRunningTime="2025-05-08 00:29:16.768805191 +0000 UTC m=+23.176573743" watchObservedRunningTime="2025-05-08 00:29:16.768981721 +0000 UTC m=+23.176750313" May 8 00:29:17.264373 containerd[2726]: time="2025-05-08T00:29:17.264322592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 8 00:29:17.264373 containerd[2726]: time="2025-05-08T00:29:17.264317312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:17.265118 containerd[2726]: time="2025-05-08T00:29:17.265095751Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:17.266881 containerd[2726]: time="2025-05-08T00:29:17.266859800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:17.267562 containerd[2726]: time="2025-05-08T00:29:17.267539635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 650.007785ms" May 8 00:29:17.267590 containerd[2726]: time="2025-05-08T00:29:17.267567956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 8 00:29:17.268690 containerd[2726]: time="2025-05-08T00:29:17.268292793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:29:17.270059 kernel: bpftool[7630]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:29:17.272931 containerd[2726]: time="2025-05-08T00:29:17.272907185Z" level=info msg="CreateContainer within sandbox \"bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:29:17.277889 containerd[2726]: time="2025-05-08T00:29:17.277864076Z" level=info msg="CreateContainer within sandbox \"bf50cd0f859fcc8e7d547c981ca9df5a987e45a0585efceb18f2f48d02d61613\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"83224a6b8b327b3fb2faa7b0747609c6f9eb36564cfc11a35cd512ed969f34b9\"" May 8 00:29:17.278230 containerd[2726]: time="2025-05-08T00:29:17.278208133Z" level=info msg="StartContainer for \"83224a6b8b327b3fb2faa7b0747609c6f9eb36564cfc11a35cd512ed969f34b9\"" May 8 00:29:17.313212 systemd[1]: Started cri-containerd-83224a6b8b327b3fb2faa7b0747609c6f9eb36564cfc11a35cd512ed969f34b9.scope - libcontainer container 
83224a6b8b327b3fb2faa7b0747609c6f9eb36564cfc11a35cd512ed969f34b9. May 8 00:29:17.337950 containerd[2726]: time="2025-05-08T00:29:17.337916066Z" level=info msg="StartContainer for \"83224a6b8b327b3fb2faa7b0747609c6f9eb36564cfc11a35cd512ed969f34b9\" returns successfully" May 8 00:29:17.345133 systemd-networkd[2631]: cali6e6ce15483c: Gained IPv6LL May 8 00:29:17.430865 systemd-networkd[2631]: vxlan.calico: Link UP May 8 00:29:17.430870 systemd-networkd[2631]: vxlan.calico: Gained carrier May 8 00:29:17.604079 containerd[2726]: time="2025-05-08T00:29:17.603962330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:17.604079 containerd[2726]: time="2025-05-08T00:29:17.604018013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 8 00:29:17.604732 containerd[2726]: time="2025-05-08T00:29:17.604714088Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:17.606543 containerd[2726]: time="2025-05-08T00:29:17.606523780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:17.609475 containerd[2726]: time="2025-05-08T00:29:17.609446007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 341.126614ms" May 8 00:29:17.609527 containerd[2726]: time="2025-05-08T00:29:17.609480089Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 8 00:29:17.611079 containerd[2726]: time="2025-05-08T00:29:17.611048488Z" level=info msg="CreateContainer within sandbox \"ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:29:17.625884 containerd[2726]: time="2025-05-08T00:29:17.625852075Z" level=info msg="CreateContainer within sandbox \"ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9a07aaf5195870a98387c04c7dae70ee97a0c1e0d806ad6bb7957f24f6c6a0fe\"" May 8 00:29:17.626209 containerd[2726]: time="2025-05-08T00:29:17.626187452Z" level=info msg="StartContainer for \"9a07aaf5195870a98387c04c7dae70ee97a0c1e0d806ad6bb7957f24f6c6a0fe\"" May 8 00:29:17.658192 systemd[1]: Started cri-containerd-9a07aaf5195870a98387c04c7dae70ee97a0c1e0d806ad6bb7957f24f6c6a0fe.scope - libcontainer container 9a07aaf5195870a98387c04c7dae70ee97a0c1e0d806ad6bb7957f24f6c6a0fe. 
May 8 00:29:17.665145 systemd-networkd[2631]: cali46cf1dad90c: Gained IPv6LL May 8 00:29:17.679056 containerd[2726]: time="2025-05-08T00:29:17.679021798Z" level=info msg="StartContainer for \"9a07aaf5195870a98387c04c7dae70ee97a0c1e0d806ad6bb7957f24f6c6a0fe\" returns successfully" May 8 00:29:17.679888 containerd[2726]: time="2025-05-08T00:29:17.679858840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:29:17.751934 kubelet[4251]: I0508 00:29:17.751888 4251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:29:17.752264 kubelet[4251]: I0508 00:29:17.751946 4251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:29:17.761756 kubelet[4251]: I0508 00:29:17.761713 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bd9f6d6cd-fck25" podStartSLOduration=7.826539264 podStartE2EDuration="8.76169989s" podCreationTimestamp="2025-05-08 00:29:09 +0000 UTC" firstStartedPulling="2025-05-08 00:29:16.333022241 +0000 UTC m=+22.740790833" lastFinishedPulling="2025-05-08 00:29:17.268182867 +0000 UTC m=+23.675951459" observedRunningTime="2025-05-08 00:29:17.760878168 +0000 UTC m=+24.168646800" watchObservedRunningTime="2025-05-08 00:29:17.76169989 +0000 UTC m=+24.169468482" May 8 00:29:17.985163 systemd-networkd[2631]: cali433cbc3cf31: Gained IPv6LL May 8 00:29:17.986109 systemd-networkd[2631]: cali184ca88ce3f: Gained IPv6LL May 8 00:29:18.011283 containerd[2726]: time="2025-05-08T00:29:18.011246698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:18.011399 containerd[2726]: time="2025-05-08T00:29:18.011283339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 8 00:29:18.012026 containerd[2726]: 
time="2025-05-08T00:29:18.012007854Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:18.013905 containerd[2726]: time="2025-05-08T00:29:18.013874664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:29:18.014629 containerd[2726]: time="2025-05-08T00:29:18.014596458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 334.705617ms" May 8 00:29:18.014665 containerd[2726]: time="2025-05-08T00:29:18.014632140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 8 00:29:18.016213 containerd[2726]: time="2025-05-08T00:29:18.016191695Z" level=info msg="CreateContainer within sandbox \"ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:29:18.021856 containerd[2726]: time="2025-05-08T00:29:18.021828726Z" level=info msg="CreateContainer within sandbox \"ab4996b71611c2902eb7448ab536dc81c234c9719840a95c9c51f6cd0f8bfcbd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a18ba496da5de6da4504a6add38123be9add4bf17848c99d9ae022a09a81975a\"" May 8 00:29:18.022195 containerd[2726]: time="2025-05-08T00:29:18.022173383Z" level=info msg="StartContainer for 
\"a18ba496da5de6da4504a6add38123be9add4bf17848c99d9ae022a09a81975a\"" May 8 00:29:18.054166 systemd[1]: Started cri-containerd-a18ba496da5de6da4504a6add38123be9add4bf17848c99d9ae022a09a81975a.scope - libcontainer container a18ba496da5de6da4504a6add38123be9add4bf17848c99d9ae022a09a81975a. May 8 00:29:18.074738 containerd[2726]: time="2025-05-08T00:29:18.074706267Z" level=info msg="StartContainer for \"a18ba496da5de6da4504a6add38123be9add4bf17848c99d9ae022a09a81975a\" returns successfully" May 8 00:29:18.113140 systemd-networkd[2631]: cali0b6f108b823: Gained IPv6LL May 8 00:29:18.241127 systemd-networkd[2631]: cali96449c62485: Gained IPv6LL May 8 00:29:18.701873 kubelet[4251]: I0508 00:29:18.701847 4251 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:29:18.701973 kubelet[4251]: I0508 00:29:18.701880 4251 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:29:18.763907 kubelet[4251]: I0508 00:29:18.763865 4251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qllhv" podStartSLOduration=8.179298191 podStartE2EDuration="9.76384982s" podCreationTimestamp="2025-05-08 00:29:09 +0000 UTC" firstStartedPulling="2025-05-08 00:29:16.430640418 +0000 UTC m=+22.838409010" lastFinishedPulling="2025-05-08 00:29:18.015192047 +0000 UTC m=+24.422960639" observedRunningTime="2025-05-08 00:29:18.76363869 +0000 UTC m=+25.171407282" watchObservedRunningTime="2025-05-08 00:29:18.76384982 +0000 UTC m=+25.171618412" May 8 00:29:19.393164 systemd-networkd[2631]: vxlan.calico: Gained IPv6LL May 8 00:29:31.252678 kubelet[4251]: I0508 00:29:31.252619 4251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:29:53.657979 containerd[2726]: time="2025-05-08T00:29:53.657851785Z" 
level=info msg="StopPodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\"" May 8 00:29:53.657979 containerd[2726]: time="2025-05-08T00:29:53.657954106Z" level=info msg="TearDown network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" successfully" May 8 00:29:53.657979 containerd[2726]: time="2025-05-08T00:29:53.657963426Z" level=info msg="StopPodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" returns successfully" May 8 00:29:53.658651 containerd[2726]: time="2025-05-08T00:29:53.658260271Z" level=info msg="RemovePodSandbox for \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\"" May 8 00:29:53.658651 containerd[2726]: time="2025-05-08T00:29:53.658288471Z" level=info msg="Forcibly stopping sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\"" May 8 00:29:53.658651 containerd[2726]: time="2025-05-08T00:29:53.658354072Z" level=info msg="TearDown network for sandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" successfully" May 8 00:29:53.659855 containerd[2726]: time="2025-05-08T00:29:53.659832656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.659900 containerd[2726]: time="2025-05-08T00:29:53.659882256Z" level=info msg="RemovePodSandbox \"9afd6c49e2917e6e736b8ace36a2c194677864aa1e959c883053632abcaddaa0\" returns successfully" May 8 00:29:53.660147 containerd[2726]: time="2025-05-08T00:29:53.660131100Z" level=info msg="StopPodSandbox for \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\"" May 8 00:29:53.660207 containerd[2726]: time="2025-05-08T00:29:53.660196461Z" level=info msg="TearDown network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" successfully" May 8 00:29:53.660236 containerd[2726]: time="2025-05-08T00:29:53.660206821Z" level=info msg="StopPodSandbox for \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" returns successfully" May 8 00:29:53.661029 containerd[2726]: time="2025-05-08T00:29:53.660397904Z" level=info msg="RemovePodSandbox for \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\"" May 8 00:29:53.661029 containerd[2726]: time="2025-05-08T00:29:53.660427265Z" level=info msg="Forcibly stopping sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\"" May 8 00:29:53.661029 containerd[2726]: time="2025-05-08T00:29:53.660491946Z" level=info msg="TearDown network for sandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" successfully" May 8 00:29:53.661795 containerd[2726]: time="2025-05-08T00:29:53.661773886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.661832 containerd[2726]: time="2025-05-08T00:29:53.661817567Z" level=info msg="RemovePodSandbox \"ba56e2e430cdf643ac501b39c9b3574f91b31353901287d8e45ee2a425f7c19e\" returns successfully" May 8 00:29:53.662018 containerd[2726]: time="2025-05-08T00:29:53.662004450Z" level=info msg="StopPodSandbox for \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\"" May 8 00:29:53.662098 containerd[2726]: time="2025-05-08T00:29:53.662086971Z" level=info msg="TearDown network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\" successfully" May 8 00:29:53.662125 containerd[2726]: time="2025-05-08T00:29:53.662097411Z" level=info msg="StopPodSandbox for \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\" returns successfully" May 8 00:29:53.662281 containerd[2726]: time="2025-05-08T00:29:53.662267614Z" level=info msg="RemovePodSandbox for \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\"" May 8 00:29:53.662317 containerd[2726]: time="2025-05-08T00:29:53.662284214Z" level=info msg="Forcibly stopping sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\"" May 8 00:29:53.662340 containerd[2726]: time="2025-05-08T00:29:53.662330015Z" level=info msg="TearDown network for sandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\" successfully" May 8 00:29:53.663606 containerd[2726]: time="2025-05-08T00:29:53.663586634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.663640 containerd[2726]: time="2025-05-08T00:29:53.663625995Z" level=info msg="RemovePodSandbox \"21d800e01fb67a3c0fce454695882c760191177baa675ff69e1976f6b1d3be13\" returns successfully" May 8 00:29:53.663854 containerd[2726]: time="2025-05-08T00:29:53.663834718Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\"" May 8 00:29:53.663921 containerd[2726]: time="2025-05-08T00:29:53.663910040Z" level=info msg="TearDown network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" successfully" May 8 00:29:53.663949 containerd[2726]: time="2025-05-08T00:29:53.663920880Z" level=info msg="StopPodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" returns successfully" May 8 00:29:53.664190 containerd[2726]: time="2025-05-08T00:29:53.664176324Z" level=info msg="RemovePodSandbox for \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\"" May 8 00:29:53.664224 containerd[2726]: time="2025-05-08T00:29:53.664195124Z" level=info msg="Forcibly stopping sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\"" May 8 00:29:53.664261 containerd[2726]: time="2025-05-08T00:29:53.664249765Z" level=info msg="TearDown network for sandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" successfully" May 8 00:29:53.665561 containerd[2726]: time="2025-05-08T00:29:53.665504384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.665561 containerd[2726]: time="2025-05-08T00:29:53.665546625Z" level=info msg="RemovePodSandbox \"12947ca6f77c36c78e4167d0d1181f19f1ad6a15c42ed49553c682554a8c9888\" returns successfully" May 8 00:29:53.665953 containerd[2726]: time="2025-05-08T00:29:53.665732668Z" level=info msg="StopPodSandbox for \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\"" May 8 00:29:53.665953 containerd[2726]: time="2025-05-08T00:29:53.665817589Z" level=info msg="TearDown network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" successfully" May 8 00:29:53.665953 containerd[2726]: time="2025-05-08T00:29:53.665828070Z" level=info msg="StopPodSandbox for \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" returns successfully" May 8 00:29:53.666041 containerd[2726]: time="2025-05-08T00:29:53.666021353Z" level=info msg="RemovePodSandbox for \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\"" May 8 00:29:53.666098 containerd[2726]: time="2025-05-08T00:29:53.666045153Z" level=info msg="Forcibly stopping sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\"" May 8 00:29:53.666129 containerd[2726]: time="2025-05-08T00:29:53.666105114Z" level=info msg="TearDown network for sandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" successfully" May 8 00:29:53.669460 containerd[2726]: time="2025-05-08T00:29:53.669430246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.669546 containerd[2726]: time="2025-05-08T00:29:53.669483167Z" level=info msg="RemovePodSandbox \"f9fd1527c8260a82cd3d4944973c5f76e9187a111c51e90a783ca2311fcae921\" returns successfully" May 8 00:29:53.669767 containerd[2726]: time="2025-05-08T00:29:53.669751571Z" level=info msg="StopPodSandbox for \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\"" May 8 00:29:53.669842 containerd[2726]: time="2025-05-08T00:29:53.669830692Z" level=info msg="TearDown network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\" successfully" May 8 00:29:53.669873 containerd[2726]: time="2025-05-08T00:29:53.669841532Z" level=info msg="StopPodSandbox for \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\" returns successfully" May 8 00:29:53.670100 containerd[2726]: time="2025-05-08T00:29:53.670080216Z" level=info msg="RemovePodSandbox for \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\"" May 8 00:29:53.670132 containerd[2726]: time="2025-05-08T00:29:53.670106537Z" level=info msg="Forcibly stopping sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\"" May 8 00:29:53.670191 containerd[2726]: time="2025-05-08T00:29:53.670179218Z" level=info msg="TearDown network for sandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\" successfully" May 8 00:29:53.671486 containerd[2726]: time="2025-05-08T00:29:53.671466438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.671521 containerd[2726]: time="2025-05-08T00:29:53.671511039Z" level=info msg="RemovePodSandbox \"2787919c6aefcd00f7835f6d909cd97dd98a3ee72a86adf2e1bfcc035144018d\" returns successfully" May 8 00:29:53.671726 containerd[2726]: time="2025-05-08T00:29:53.671712682Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\"" May 8 00:29:53.671793 containerd[2726]: time="2025-05-08T00:29:53.671781963Z" level=info msg="TearDown network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" successfully" May 8 00:29:53.671823 containerd[2726]: time="2025-05-08T00:29:53.671792243Z" level=info msg="StopPodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" returns successfully" May 8 00:29:53.671997 containerd[2726]: time="2025-05-08T00:29:53.671983126Z" level=info msg="RemovePodSandbox for \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\"" May 8 00:29:53.672020 containerd[2726]: time="2025-05-08T00:29:53.672004006Z" level=info msg="Forcibly stopping sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\"" May 8 00:29:53.672071 containerd[2726]: time="2025-05-08T00:29:53.672061047Z" level=info msg="TearDown network for sandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" successfully" May 8 00:29:53.673312 containerd[2726]: time="2025-05-08T00:29:53.673292347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.673345 containerd[2726]: time="2025-05-08T00:29:53.673337907Z" level=info msg="RemovePodSandbox \"373e32245ca969d94d757fec6a1370454a92795b9ce60ddde0d1bd70b90badcf\" returns successfully" May 8 00:29:53.673589 containerd[2726]: time="2025-05-08T00:29:53.673569111Z" level=info msg="StopPodSandbox for \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\"" May 8 00:29:53.673684 containerd[2726]: time="2025-05-08T00:29:53.673673113Z" level=info msg="TearDown network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" successfully" May 8 00:29:53.673709 containerd[2726]: time="2025-05-08T00:29:53.673685153Z" level=info msg="StopPodSandbox for \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" returns successfully" May 8 00:29:53.673890 containerd[2726]: time="2025-05-08T00:29:53.673871596Z" level=info msg="RemovePodSandbox for \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\"" May 8 00:29:53.673910 containerd[2726]: time="2025-05-08T00:29:53.673896876Z" level=info msg="Forcibly stopping sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\"" May 8 00:29:53.673974 containerd[2726]: time="2025-05-08T00:29:53.673964277Z" level=info msg="TearDown network for sandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" successfully" May 8 00:29:53.675240 containerd[2726]: time="2025-05-08T00:29:53.675213657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.675309 containerd[2726]: time="2025-05-08T00:29:53.675259737Z" level=info msg="RemovePodSandbox \"61eedb10b714bade77c4ff72f5346af84b82c95b4cd0423d47799566b4c3c5d5\" returns successfully" May 8 00:29:53.675447 containerd[2726]: time="2025-05-08T00:29:53.675433100Z" level=info msg="StopPodSandbox for \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\"" May 8 00:29:53.675506 containerd[2726]: time="2025-05-08T00:29:53.675496741Z" level=info msg="TearDown network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\" successfully" May 8 00:29:53.675526 containerd[2726]: time="2025-05-08T00:29:53.675506341Z" level=info msg="StopPodSandbox for \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\" returns successfully" May 8 00:29:53.675695 containerd[2726]: time="2025-05-08T00:29:53.675679704Z" level=info msg="RemovePodSandbox for \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\"" May 8 00:29:53.675717 containerd[2726]: time="2025-05-08T00:29:53.675701824Z" level=info msg="Forcibly stopping sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\"" May 8 00:29:53.675771 containerd[2726]: time="2025-05-08T00:29:53.675762465Z" level=info msg="TearDown network for sandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\" successfully" May 8 00:29:53.676996 containerd[2726]: time="2025-05-08T00:29:53.676977204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.677027 containerd[2726]: time="2025-05-08T00:29:53.677018165Z" level=info msg="RemovePodSandbox \"796e5d26c9c525ece7b279e4d9ce76e62878cbf71843d264c9e891c733d79725\" returns successfully" May 8 00:29:53.677233 containerd[2726]: time="2025-05-08T00:29:53.677218728Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\"" May 8 00:29:53.677300 containerd[2726]: time="2025-05-08T00:29:53.677289409Z" level=info msg="TearDown network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" successfully" May 8 00:29:53.677323 containerd[2726]: time="2025-05-08T00:29:53.677299889Z" level=info msg="StopPodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" returns successfully" May 8 00:29:53.677483 containerd[2726]: time="2025-05-08T00:29:53.677468452Z" level=info msg="RemovePodSandbox for \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\"" May 8 00:29:53.677510 containerd[2726]: time="2025-05-08T00:29:53.677488692Z" level=info msg="Forcibly stopping sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\"" May 8 00:29:53.677560 containerd[2726]: time="2025-05-08T00:29:53.677550293Z" level=info msg="TearDown network for sandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" successfully" May 8 00:29:53.678800 containerd[2726]: time="2025-05-08T00:29:53.678780993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.678832 containerd[2726]: time="2025-05-08T00:29:53.678823353Z" level=info msg="RemovePodSandbox \"bfc4dbe1cc4b8a2be055c4557219480cba3356816f433d5d5b29ac6b260dd120\" returns successfully" May 8 00:29:53.679040 containerd[2726]: time="2025-05-08T00:29:53.679027196Z" level=info msg="StopPodSandbox for \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\"" May 8 00:29:53.679113 containerd[2726]: time="2025-05-08T00:29:53.679102558Z" level=info msg="TearDown network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" successfully" May 8 00:29:53.679144 containerd[2726]: time="2025-05-08T00:29:53.679112438Z" level=info msg="StopPodSandbox for \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" returns successfully" May 8 00:29:53.679322 containerd[2726]: time="2025-05-08T00:29:53.679306921Z" level=info msg="RemovePodSandbox for \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\"" May 8 00:29:53.679345 containerd[2726]: time="2025-05-08T00:29:53.679327561Z" level=info msg="Forcibly stopping sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\"" May 8 00:29:53.679396 containerd[2726]: time="2025-05-08T00:29:53.679386002Z" level=info msg="TearDown network for sandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" successfully" May 8 00:29:53.680675 containerd[2726]: time="2025-05-08T00:29:53.680653102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.680710 containerd[2726]: time="2025-05-08T00:29:53.680699023Z" level=info msg="RemovePodSandbox \"1a6e3bc0023d45c24e164e60c8760e844236c1df2f8c5207d6d9ecdd113439e7\" returns successfully" May 8 00:29:53.680959 containerd[2726]: time="2025-05-08T00:29:53.680943387Z" level=info msg="StopPodSandbox for \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\"" May 8 00:29:53.681026 containerd[2726]: time="2025-05-08T00:29:53.681016588Z" level=info msg="TearDown network for sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\" successfully" May 8 00:29:53.681048 containerd[2726]: time="2025-05-08T00:29:53.681026628Z" level=info msg="StopPodSandbox for \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\" returns successfully" May 8 00:29:53.681204 containerd[2726]: time="2025-05-08T00:29:53.681189790Z" level=info msg="RemovePodSandbox for \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\"" May 8 00:29:53.681225 containerd[2726]: time="2025-05-08T00:29:53.681209591Z" level=info msg="Forcibly stopping sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\"" May 8 00:29:53.681275 containerd[2726]: time="2025-05-08T00:29:53.681266032Z" level=info msg="TearDown network for sandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\" successfully" May 8 00:29:53.682546 containerd[2726]: time="2025-05-08T00:29:53.682521011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.682576 containerd[2726]: time="2025-05-08T00:29:53.682564972Z" level=info msg="RemovePodSandbox \"5a96b32babf249c627f0f0a0eeda70a29103361bbea15b94df61401ccc0c93a3\" returns successfully" May 8 00:29:53.682819 containerd[2726]: time="2025-05-08T00:29:53.682802616Z" level=info msg="StopPodSandbox for \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\"" May 8 00:29:53.682883 containerd[2726]: time="2025-05-08T00:29:53.682872977Z" level=info msg="TearDown network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" successfully" May 8 00:29:53.682916 containerd[2726]: time="2025-05-08T00:29:53.682883577Z" level=info msg="StopPodSandbox for \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" returns successfully" May 8 00:29:53.683082 containerd[2726]: time="2025-05-08T00:29:53.683068020Z" level=info msg="RemovePodSandbox for \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\"" May 8 00:29:53.683108 containerd[2726]: time="2025-05-08T00:29:53.683087420Z" level=info msg="Forcibly stopping sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\"" May 8 00:29:53.683152 containerd[2726]: time="2025-05-08T00:29:53.683142621Z" level=info msg="TearDown network for sandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" successfully" May 8 00:29:53.684394 containerd[2726]: time="2025-05-08T00:29:53.684369760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.684450 containerd[2726]: time="2025-05-08T00:29:53.684434921Z" level=info msg="RemovePodSandbox \"b45f55deb0829db88c2ffc4dbe5facf33a69085e91b4750d372b13ae1f121188\" returns successfully" May 8 00:29:53.684886 containerd[2726]: time="2025-05-08T00:29:53.684871488Z" level=info msg="StopPodSandbox for \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\"" May 8 00:29:53.684962 containerd[2726]: time="2025-05-08T00:29:53.684951769Z" level=info msg="TearDown network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" successfully" May 8 00:29:53.684985 containerd[2726]: time="2025-05-08T00:29:53.684963330Z" level=info msg="StopPodSandbox for \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" returns successfully" May 8 00:29:53.685197 containerd[2726]: time="2025-05-08T00:29:53.685181453Z" level=info msg="RemovePodSandbox for \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\"" May 8 00:29:53.685221 containerd[2726]: time="2025-05-08T00:29:53.685202533Z" level=info msg="Forcibly stopping sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\"" May 8 00:29:53.685272 containerd[2726]: time="2025-05-08T00:29:53.685262814Z" level=info msg="TearDown network for sandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" successfully" May 8 00:29:53.686479 containerd[2726]: time="2025-05-08T00:29:53.686457713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.686514 containerd[2726]: time="2025-05-08T00:29:53.686504074Z" level=info msg="RemovePodSandbox \"d5fc0c93a6eb2066c5fba14dac08af42220ef9b2b10b9d661961874f23e01e8a\" returns successfully" May 8 00:29:53.686721 containerd[2726]: time="2025-05-08T00:29:53.686707317Z" level=info msg="StopPodSandbox for \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\"" May 8 00:29:53.686783 containerd[2726]: time="2025-05-08T00:29:53.686773518Z" level=info msg="TearDown network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\" successfully" May 8 00:29:53.686805 containerd[2726]: time="2025-05-08T00:29:53.686783518Z" level=info msg="StopPodSandbox for \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\" returns successfully" May 8 00:29:53.686999 containerd[2726]: time="2025-05-08T00:29:53.686983481Z" level=info msg="RemovePodSandbox for \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\"" May 8 00:29:53.687021 containerd[2726]: time="2025-05-08T00:29:53.687004642Z" level=info msg="Forcibly stopping sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\"" May 8 00:29:53.687080 containerd[2726]: time="2025-05-08T00:29:53.687069883Z" level=info msg="TearDown network for sandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\" successfully" May 8 00:29:53.688272 containerd[2726]: time="2025-05-08T00:29:53.688250181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.688323 containerd[2726]: time="2025-05-08T00:29:53.688295542Z" level=info msg="RemovePodSandbox \"34f5f326aa396d2f7e05eff508cd6eaef4b408a2dab5cd2235a270ca7845b3b6\" returns successfully" May 8 00:29:53.688508 containerd[2726]: time="2025-05-08T00:29:53.688494785Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\"" May 8 00:29:53.688569 containerd[2726]: time="2025-05-08T00:29:53.688559626Z" level=info msg="TearDown network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" successfully" May 8 00:29:53.688591 containerd[2726]: time="2025-05-08T00:29:53.688570106Z" level=info msg="StopPodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" returns successfully" May 8 00:29:53.688758 containerd[2726]: time="2025-05-08T00:29:53.688744469Z" level=info msg="RemovePodSandbox for \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\"" May 8 00:29:53.688780 containerd[2726]: time="2025-05-08T00:29:53.688764869Z" level=info msg="Forcibly stopping sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\"" May 8 00:29:53.688838 containerd[2726]: time="2025-05-08T00:29:53.688827710Z" level=info msg="TearDown network for sandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" successfully" May 8 00:29:53.690019 containerd[2726]: time="2025-05-08T00:29:53.689999808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.690056 containerd[2726]: time="2025-05-08T00:29:53.690042729Z" level=info msg="RemovePodSandbox \"8bedc45a22b5b8cfc40f1709cdd7ca6b140ceb6ed6427e53f3a3f571345845c3\" returns successfully" May 8 00:29:53.690252 containerd[2726]: time="2025-05-08T00:29:53.690238012Z" level=info msg="StopPodSandbox for \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\"" May 8 00:29:53.690316 containerd[2726]: time="2025-05-08T00:29:53.690307013Z" level=info msg="TearDown network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" successfully" May 8 00:29:53.690338 containerd[2726]: time="2025-05-08T00:29:53.690316493Z" level=info msg="StopPodSandbox for \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" returns successfully" May 8 00:29:53.690524 containerd[2726]: time="2025-05-08T00:29:53.690507936Z" level=info msg="RemovePodSandbox for \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\"" May 8 00:29:53.690544 containerd[2726]: time="2025-05-08T00:29:53.690529697Z" level=info msg="Forcibly stopping sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\"" May 8 00:29:53.690594 containerd[2726]: time="2025-05-08T00:29:53.690584498Z" level=info msg="TearDown network for sandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" successfully" May 8 00:29:53.691839 containerd[2726]: time="2025-05-08T00:29:53.691816717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.691879 containerd[2726]: time="2025-05-08T00:29:53.691866118Z" level=info msg="RemovePodSandbox \"20c770433d465b47cde0b654deae7cfd04fee0a8155e6e48e6173e1c8bf40544\" returns successfully" May 8 00:29:53.692078 containerd[2726]: time="2025-05-08T00:29:53.692062561Z" level=info msg="StopPodSandbox for \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\"" May 8 00:29:53.692147 containerd[2726]: time="2025-05-08T00:29:53.692135602Z" level=info msg="TearDown network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\" successfully" May 8 00:29:53.692147 containerd[2726]: time="2025-05-08T00:29:53.692145402Z" level=info msg="StopPodSandbox for \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\" returns successfully" May 8 00:29:53.692338 containerd[2726]: time="2025-05-08T00:29:53.692320605Z" level=info msg="RemovePodSandbox for \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\"" May 8 00:29:53.692368 containerd[2726]: time="2025-05-08T00:29:53.692340045Z" level=info msg="Forcibly stopping sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\"" May 8 00:29:53.692407 containerd[2726]: time="2025-05-08T00:29:53.692394766Z" level=info msg="TearDown network for sandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\" successfully" May 8 00:29:53.693656 containerd[2726]: time="2025-05-08T00:29:53.693633065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:29:53.693691 containerd[2726]: time="2025-05-08T00:29:53.693679946Z" level=info msg="RemovePodSandbox \"812ca0fab47907e7be57759d2f3b784a867f6004c1a668d1bebea93091fd72b6\" returns successfully" May 8 00:29:58.520320 kubelet[4251]: I0508 00:29:58.520273 4251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:30:23.391591 systemd[1]: sshd@0-147.28.129.25:22-218.92.0.158:15370.service: Deactivated successfully. May 8 00:30:49.545936 systemd[1]: Started sshd@8-147.28.129.25:22-218.92.0.158:18555.service - OpenSSH per-connection server daemon (218.92.0.158:18555). May 8 00:31:26.071068 systemd[1]: Started sshd@9-147.28.129.25:22-80.94.95.116:53240.service - OpenSSH per-connection server daemon (80.94.95.116:53240). May 8 00:31:27.718061 sshd[8497]: Connection closed by authenticating user root 80.94.95.116 port 53240 [preauth] May 8 00:31:27.719939 systemd[1]: sshd@9-147.28.129.25:22-80.94.95.116:53240.service: Deactivated successfully. May 8 00:31:38.246953 systemd[1]: Started sshd@10-147.28.129.25:22-124.74.9.190:53757.service - OpenSSH per-connection server daemon (124.74.9.190:53757). May 8 00:31:41.606164 sshd-session[8512]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.74.9.190 user=news May 8 00:31:43.919306 sshd[8510]: PAM: Permission denied for news from 124.74.9.190 May 8 00:31:44.666397 sshd[8510]: Connection closed by authenticating user news 124.74.9.190 port 53757 [preauth] May 8 00:31:44.668521 systemd[1]: sshd@10-147.28.129.25:22-124.74.9.190:53757.service: Deactivated successfully. May 8 00:32:49.566875 systemd[1]: sshd@8-147.28.129.25:22-218.92.0.158:18555.service: Deactivated successfully. May 8 00:33:09.580024 systemd[1]: Started sshd@11-147.28.129.25:22-195.149.108.100:60946.service - OpenSSH per-connection server daemon (195.149.108.100:60946). 
May 8 00:33:09.650738 systemd[1]: Started sshd@12-147.28.129.25:22-218.92.0.158:23705.service - OpenSSH per-connection server daemon (218.92.0.158:23705). May 8 00:33:10.230791 update_engine[2720]: I20250508 00:33:10.230737 2720 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 8 00:33:10.230791 update_engine[2720]: I20250508 00:33:10.230783 2720 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 8 00:33:10.231179 update_engine[2720]: I20250508 00:33:10.230985 2720 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 8 00:33:10.231340 update_engine[2720]: I20250508 00:33:10.231323 2720 omaha_request_params.cc:62] Current group set to beta May 8 00:33:10.231409 update_engine[2720]: I20250508 00:33:10.231397 2720 update_attempter.cc:499] Already updated boot flags. Skipping. May 8 00:33:10.231430 update_engine[2720]: I20250508 00:33:10.231406 2720 update_attempter.cc:643] Scheduling an action processor start. 
May 8 00:33:10.231430 update_engine[2720]: I20250508 00:33:10.231420 2720 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:33:10.231470 update_engine[2720]: I20250508 00:33:10.231448 2720 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 8 00:33:10.231508 update_engine[2720]: I20250508 00:33:10.231496 2720 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:33:10.231529 update_engine[2720]: I20250508 00:33:10.231505 2720 omaha_request_action.cc:272] Request: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: May 8 00:33:10.231529 update_engine[2720]: I20250508 00:33:10.231510 2720 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:33:10.231701 locksmithd[2752]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 8 00:33:10.232452 update_engine[2720]: I20250508 00:33:10.232435 2720 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:33:10.232739 update_engine[2720]: I20250508 00:33:10.232718 2720 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 8 00:33:10.233275 update_engine[2720]: E20250508 00:33:10.233258 2720 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:33:10.233317 update_engine[2720]: I20250508 00:33:10.233305 2720 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 8 00:33:10.686175 sshd[8752]: Invalid user oracle from 195.149.108.100 port 60946 May 8 00:33:10.956542 sshd-session[8756]: pam_faillock(sshd:auth): User unknown May 8 00:33:10.960867 sshd[8752]: Postponed keyboard-interactive for invalid user oracle from 195.149.108.100 port 60946 ssh2 [preauth] May 8 00:33:11.191357 sshd-session[8756]: pam_unix(sshd:auth): check pass; user unknown May 8 00:33:11.191387 sshd-session[8756]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=195.149.108.100 May 8 00:33:11.191666 sshd-session[8756]: pam_faillock(sshd:auth): User unknown May 8 00:33:13.524788 sshd[8752]: PAM: Permission denied for illegal user oracle from 195.149.108.100 May 8 00:33:13.525175 sshd[8752]: Failed keyboard-interactive/pam for invalid user oracle from 195.149.108.100 port 60946 ssh2 May 8 00:33:13.782741 sshd[8752]: Connection closed by invalid user oracle 195.149.108.100 port 60946 [preauth] May 8 00:33:13.784676 systemd[1]: sshd@11-147.28.129.25:22-195.149.108.100:60946.service: Deactivated successfully. May 8 00:33:20.140326 update_engine[2720]: I20250508 00:33:20.140255 2720 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:33:20.140976 update_engine[2720]: I20250508 00:33:20.140542 2720 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:33:20.140976 update_engine[2720]: I20250508 00:33:20.140760 2720 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 8 00:33:20.141250 update_engine[2720]: E20250508 00:33:20.141229 2720 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:33:20.141282 update_engine[2720]: I20250508 00:33:20.141270 2720 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 8 00:33:30.137134 update_engine[2720]: I20250508 00:33:30.137040 2720 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:33:30.137603 update_engine[2720]: I20250508 00:33:30.137310 2720 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:33:30.137603 update_engine[2720]: I20250508 00:33:30.137518 2720 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:33:30.137965 update_engine[2720]: E20250508 00:33:30.137946 2720 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:33:30.137989 update_engine[2720]: I20250508 00:33:30.137980 2720 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 8 00:33:31.489042 systemd[1]: Started sshd@13-147.28.129.25:22-134.199.221.21:49974.service - OpenSSH per-connection server daemon (134.199.221.21:49974). May 8 00:33:31.937087 sshd[8834]: Connection closed by authenticating user root 134.199.221.21 port 49974 [preauth] May 8 00:33:31.939482 systemd[1]: sshd@13-147.28.129.25:22-134.199.221.21:49974.service: Deactivated successfully. May 8 00:33:40.137433 update_engine[2720]: I20250508 00:33:40.137357 2720 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:33:40.137799 update_engine[2720]: I20250508 00:33:40.137599 2720 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:33:40.137832 update_engine[2720]: I20250508 00:33:40.137816 2720 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 8 00:33:40.138244 update_engine[2720]: E20250508 00:33:40.138226 2720 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:33:40.138275 update_engine[2720]: I20250508 00:33:40.138262 2720 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 00:33:40.138275 update_engine[2720]: I20250508 00:33:40.138269 2720 omaha_request_action.cc:617] Omaha request response: May 8 00:33:40.138352 update_engine[2720]: E20250508 00:33:40.138340 2720 omaha_request_action.cc:636] Omaha request network transfer failed. May 8 00:33:40.138374 update_engine[2720]: I20250508 00:33:40.138356 2720 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 8 00:33:40.138374 update_engine[2720]: I20250508 00:33:40.138362 2720 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:33:40.138374 update_engine[2720]: I20250508 00:33:40.138366 2720 update_attempter.cc:306] Processing Done. May 8 00:33:40.138433 update_engine[2720]: E20250508 00:33:40.138379 2720 update_attempter.cc:619] Update failed. May 8 00:33:40.138433 update_engine[2720]: I20250508 00:33:40.138384 2720 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 8 00:33:40.138433 update_engine[2720]: I20250508 00:33:40.138389 2720 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 8 00:33:40.138433 update_engine[2720]: I20250508 00:33:40.138394 2720 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 8 00:33:40.138512 update_engine[2720]: I20250508 00:33:40.138451  2720 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 8 00:33:40.138512 update_engine[2720]: I20250508 00:33:40.138470  2720 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 8 00:33:40.138512 update_engine[2720]: I20250508 00:33:40.138475  2720 omaha_request_action.cc:272] Request:
May 8 00:33:40.138512 update_engine[2720]:
May 8 00:33:40.138512 update_engine[2720]:
May 8 00:33:40.138512 update_engine[2720]:
May 8 00:33:40.138512 update_engine[2720]:
May 8 00:33:40.138512 update_engine[2720]:
May 8 00:33:40.138512 update_engine[2720]:
May 8 00:33:40.138512 update_engine[2720]: I20250508 00:33:40.138480  2720 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 00:33:40.138683 update_engine[2720]: I20250508 00:33:40.138594  2720 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 00:33:40.138710 locksmithd[2752]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 8 00:33:40.138890 update_engine[2720]: I20250508 00:33:40.138757  2720 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 00:33:40.139109 update_engine[2720]: E20250508 00:33:40.139091  2720 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 00:33:40.139178 update_engine[2720]: I20250508 00:33:40.139122  2720 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 8 00:33:40.139178 update_engine[2720]: I20250508 00:33:40.139130  2720 omaha_request_action.cc:617] Omaha request response:
May 8 00:33:40.139178 update_engine[2720]: I20250508 00:33:40.139135  2720 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 00:33:40.139178 update_engine[2720]: I20250508 00:33:40.139139  2720 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 00:33:40.139178 update_engine[2720]: I20250508 00:33:40.139143  2720 update_attempter.cc:306] Processing Done.
May 8 00:33:40.139178 update_engine[2720]: I20250508 00:33:40.139148  2720 update_attempter.cc:310] Error event sent.
May 8 00:33:40.139178 update_engine[2720]: I20250508 00:33:40.139155  2720 update_check_scheduler.cc:74] Next update check in 47m59s
May 8 00:33:40.139312 locksmithd[2752]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 8 00:33:51.109122 systemd[1]: Started sshd@14-147.28.129.25:22-101.100.184.80:36140.service - OpenSSH per-connection server daemon (101.100.184.80:36140).
May 8 00:33:54.486289 sshd-session[8913]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=101.100.184.80 user=news
May 8 00:33:56.723934 sshd[8890]: PAM: Permission denied for news from 101.100.184.80
May 8 00:33:57.358155 sshd[8890]: Connection closed by authenticating user news 101.100.184.80 port 36140 [preauth]
May 8 00:33:57.360065 systemd[1]: sshd@14-147.28.129.25:22-101.100.184.80:36140.service: Deactivated successfully.
May 8 00:35:09.670181 systemd[1]: sshd@12-147.28.129.25:22-218.92.0.158:23705.service: Deactivated successfully.
May 8 00:35:38.136048 systemd[1]: Started sshd@15-147.28.129.25:22-218.92.0.158:41654.service - OpenSSH per-connection server daemon (218.92.0.158:41654).
May 8 00:35:42.698000 systemd[1]: Started sshd@16-147.28.129.25:22-218.92.0.201:16224.service - OpenSSH per-connection server daemon (218.92.0.201:16224).
May 8 00:35:42.922826 sshd[9161]: Unable to negotiate with 218.92.0.201 port 16224: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
May 8 00:35:42.924398 systemd[1]: sshd@16-147.28.129.25:22-218.92.0.201:16224.service: Deactivated successfully.
May 8 00:37:00.633074 systemd[1]: Started sshd@17-147.28.129.25:22-139.178.68.195:51068.service - OpenSSH per-connection server daemon (139.178.68.195:51068).
May 8 00:37:01.060969 sshd[9364]: Accepted publickey for core from 139.178.68.195 port 51068 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:01.062063 sshd-session[9364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:01.065554 systemd-logind[2709]: New session 10 of user core.
May 8 00:37:01.074201 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:37:01.419834 sshd[9368]: Connection closed by 139.178.68.195 port 51068
May 8 00:37:01.420168 sshd-session[9364]: pam_unix(sshd:session): session closed for user core
May 8 00:37:01.423212 systemd[1]: sshd@17-147.28.129.25:22-139.178.68.195:51068.service: Deactivated successfully.
May 8 00:37:01.425455 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:37:01.425991 systemd-logind[2709]: Session 10 logged out. Waiting for processes to exit.
May 8 00:37:01.426561 systemd-logind[2709]: Removed session 10.
May 8 00:37:06.497009 systemd[1]: Started sshd@18-147.28.129.25:22-139.178.68.195:47786.service - OpenSSH per-connection server daemon (139.178.68.195:47786).
May 8 00:37:06.913435 sshd[9402]: Accepted publickey for core from 139.178.68.195 port 47786 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:06.914469 sshd-session[9402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:06.917518 systemd-logind[2709]: New session 11 of user core.
May 8 00:37:06.927154 systemd[1]: Started session-11.scope - Session 11 of User core.
May 8 00:37:07.259851 sshd[9404]: Connection closed by 139.178.68.195 port 47786
May 8 00:37:07.260238 sshd-session[9402]: pam_unix(sshd:session): session closed for user core
May 8 00:37:07.263001 systemd[1]: sshd@18-147.28.129.25:22-139.178.68.195:47786.service: Deactivated successfully.
May 8 00:37:07.264682 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:37:07.265245 systemd-logind[2709]: Session 11 logged out. Waiting for processes to exit.
May 8 00:37:07.265778 systemd-logind[2709]: Removed session 11.
May 8 00:37:07.332938 systemd[1]: Started sshd@19-147.28.129.25:22-139.178.68.195:47794.service - OpenSSH per-connection server daemon (139.178.68.195:47794).
May 8 00:37:07.760491 sshd[9444]: Accepted publickey for core from 139.178.68.195 port 47794 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:07.761517 sshd-session[9444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:07.764481 systemd-logind[2709]: New session 12 of user core.
May 8 00:37:07.774158 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 00:37:08.138341 sshd[9446]: Connection closed by 139.178.68.195 port 47794
May 8 00:37:08.138638 sshd-session[9444]: pam_unix(sshd:session): session closed for user core
May 8 00:37:08.141528 systemd[1]: sshd@19-147.28.129.25:22-139.178.68.195:47794.service: Deactivated successfully.
May 8 00:37:08.143206 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:37:08.143766 systemd-logind[2709]: Session 12 logged out. Waiting for processes to exit.
May 8 00:37:08.144318 systemd-logind[2709]: Removed session 12.
May 8 00:37:08.208918 systemd[1]: Started sshd@20-147.28.129.25:22-139.178.68.195:47806.service - OpenSSH per-connection server daemon (139.178.68.195:47806).
May 8 00:37:08.624398 sshd[9482]: Accepted publickey for core from 139.178.68.195 port 47806 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:08.625392 sshd-session[9482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:08.628482 systemd-logind[2709]: New session 13 of user core.
May 8 00:37:08.647230 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:37:08.974320 sshd[9484]: Connection closed by 139.178.68.195 port 47806
May 8 00:37:08.974723 sshd-session[9482]: pam_unix(sshd:session): session closed for user core
May 8 00:37:08.977887 systemd[1]: sshd@20-147.28.129.25:22-139.178.68.195:47806.service: Deactivated successfully.
May 8 00:37:08.980218 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:37:08.980781 systemd-logind[2709]: Session 13 logged out. Waiting for processes to exit.
May 8 00:37:08.981331 systemd-logind[2709]: Removed session 13.
May 8 00:37:14.057095 systemd[1]: Started sshd@21-147.28.129.25:22-139.178.68.195:47808.service - OpenSSH per-connection server daemon (139.178.68.195:47808).
May 8 00:37:14.496765 sshd[9523]: Accepted publickey for core from 139.178.68.195 port 47808 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:14.497786 sshd-session[9523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:14.501026 systemd-logind[2709]: New session 14 of user core.
May 8 00:37:14.516145 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:37:14.856963 sshd[9540]: Connection closed by 139.178.68.195 port 47808
May 8 00:37:14.857379 sshd-session[9523]: pam_unix(sshd:session): session closed for user core
May 8 00:37:14.860270 systemd[1]: sshd@21-147.28.129.25:22-139.178.68.195:47808.service: Deactivated successfully.
May 8 00:37:14.861940 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:37:14.862486 systemd-logind[2709]: Session 14 logged out. Waiting for processes to exit.
May 8 00:37:14.863021 systemd-logind[2709]: Removed session 14.
May 8 00:37:14.928034 systemd[1]: Started sshd@22-147.28.129.25:22-139.178.68.195:47820.service - OpenSSH per-connection server daemon (139.178.68.195:47820).
May 8 00:37:15.343390 sshd[9576]: Accepted publickey for core from 139.178.68.195 port 47820 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:15.344512 sshd-session[9576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:15.347710 systemd-logind[2709]: New session 15 of user core.
May 8 00:37:15.357154 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:37:15.746431 sshd[9581]: Connection closed by 139.178.68.195 port 47820
May 8 00:37:15.746743 sshd-session[9576]: pam_unix(sshd:session): session closed for user core
May 8 00:37:15.749659 systemd[1]: sshd@22-147.28.129.25:22-139.178.68.195:47820.service: Deactivated successfully.
May 8 00:37:15.751382 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:37:15.751930 systemd-logind[2709]: Session 15 logged out. Waiting for processes to exit.
May 8 00:37:15.752488 systemd-logind[2709]: Removed session 15.
May 8 00:37:15.820026 systemd[1]: Started sshd@23-147.28.129.25:22-139.178.68.195:41714.service - OpenSSH per-connection server daemon (139.178.68.195:41714).
May 8 00:37:16.251529 sshd[9615]: Accepted publickey for core from 139.178.68.195 port 41714 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:16.252593 sshd-session[9615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:16.255817 systemd-logind[2709]: New session 16 of user core.
May 8 00:37:16.269159 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:37:16.944424 sshd[9617]: Connection closed by 139.178.68.195 port 41714
May 8 00:37:16.944821 sshd-session[9615]: pam_unix(sshd:session): session closed for user core
May 8 00:37:16.947783 systemd[1]: sshd@23-147.28.129.25:22-139.178.68.195:41714.service: Deactivated successfully.
May 8 00:37:16.949493 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:37:16.950090 systemd-logind[2709]: Session 16 logged out. Waiting for processes to exit.
May 8 00:37:16.950670 systemd-logind[2709]: Removed session 16.
May 8 00:37:17.019890 systemd[1]: Started sshd@24-147.28.129.25:22-139.178.68.195:41726.service - OpenSSH per-connection server daemon (139.178.68.195:41726).
May 8 00:37:17.452642 sshd[9703]: Accepted publickey for core from 139.178.68.195 port 41726 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:17.453693 sshd-session[9703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:17.456966 systemd-logind[2709]: New session 17 of user core.
May 8 00:37:17.466155 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:37:17.895508 sshd[9705]: Connection closed by 139.178.68.195 port 41726
May 8 00:37:17.895865 sshd-session[9703]: pam_unix(sshd:session): session closed for user core
May 8 00:37:17.898798 systemd[1]: sshd@24-147.28.129.25:22-139.178.68.195:41726.service: Deactivated successfully.
May 8 00:37:17.900479 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:37:17.900988 systemd-logind[2709]: Session 17 logged out. Waiting for processes to exit.
May 8 00:37:17.901553 systemd-logind[2709]: Removed session 17.
May 8 00:37:17.968888 systemd[1]: Started sshd@25-147.28.129.25:22-139.178.68.195:41742.service - OpenSSH per-connection server daemon (139.178.68.195:41742).
May 8 00:37:18.401027 sshd[9756]: Accepted publickey for core from 139.178.68.195 port 41742 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:18.402110 sshd-session[9756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:18.405169 systemd-logind[2709]: New session 18 of user core.
May 8 00:37:18.416155 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:37:18.752892 sshd[9758]: Connection closed by 139.178.68.195 port 41742
May 8 00:37:18.753322 sshd-session[9756]: pam_unix(sshd:session): session closed for user core
May 8 00:37:18.756064 systemd[1]: sshd@25-147.28.129.25:22-139.178.68.195:41742.service: Deactivated successfully.
May 8 00:37:18.758365 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:37:18.758938 systemd-logind[2709]: Session 18 logged out. Waiting for processes to exit.
May 8 00:37:18.759513 systemd-logind[2709]: Removed session 18.
May 8 00:37:23.832136 systemd[1]: Started sshd@26-147.28.129.25:22-139.178.68.195:41748.service - OpenSSH per-connection server daemon (139.178.68.195:41748).
May 8 00:37:24.261713 sshd[9816]: Accepted publickey for core from 139.178.68.195 port 41748 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:24.262781 sshd-session[9816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:24.265869 systemd-logind[2709]: New session 19 of user core.
May 8 00:37:24.276207 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:37:24.616484 sshd[9823]: Connection closed by 139.178.68.195 port 41748
May 8 00:37:24.616833 sshd-session[9816]: pam_unix(sshd:session): session closed for user core
May 8 00:37:24.619757 systemd[1]: sshd@26-147.28.129.25:22-139.178.68.195:41748.service: Deactivated successfully.
May 8 00:37:24.621477 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:37:24.622039 systemd-logind[2709]: Session 19 logged out. Waiting for processes to exit.
May 8 00:37:24.622607 systemd-logind[2709]: Removed session 19.
May 8 00:37:29.696015 systemd[1]: Started sshd@27-147.28.129.25:22-139.178.68.195:47870.service - OpenSSH per-connection server daemon (139.178.68.195:47870).
May 8 00:37:30.126391 sshd[9860]: Accepted publickey for core from 139.178.68.195 port 47870 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:30.127466 sshd-session[9860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:30.130512 systemd-logind[2709]: New session 20 of user core.
May 8 00:37:30.142148 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:37:30.479432 sshd[9862]: Connection closed by 139.178.68.195 port 47870
May 8 00:37:30.479777 sshd-session[9860]: pam_unix(sshd:session): session closed for user core
May 8 00:37:30.482689 systemd[1]: sshd@27-147.28.129.25:22-139.178.68.195:47870.service: Deactivated successfully.
May 8 00:37:30.484398 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:37:30.485460 systemd-logind[2709]: Session 20 logged out. Waiting for processes to exit.
May 8 00:37:30.486075 systemd-logind[2709]: Removed session 20.
May 8 00:37:35.554090 systemd[1]: Started sshd@28-147.28.129.25:22-139.178.68.195:58376.service - OpenSSH per-connection server daemon (139.178.68.195:58376).
May 8 00:37:35.983476 sshd[9900]: Accepted publickey for core from 139.178.68.195 port 58376 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:37:35.984448 sshd-session[9900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:35.987499 systemd-logind[2709]: New session 21 of user core.
May 8 00:37:35.997150 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:37:36.336809 sshd[9902]: Connection closed by 139.178.68.195 port 58376
May 8 00:37:36.337173 sshd-session[9900]: pam_unix(sshd:session): session closed for user core
May 8 00:37:36.340057 systemd[1]: sshd@28-147.28.129.25:22-139.178.68.195:58376.service: Deactivated successfully.
May 8 00:37:36.341767 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:37:36.342344 systemd-logind[2709]: Session 21 logged out. Waiting for processes to exit.
May 8 00:37:36.342910 systemd-logind[2709]: Removed session 21.