May 16 00:51:24.177693 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 16 00:51:24.177716 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 15 22:19:24 -00 2025 May 16 00:51:24.177724 kernel: KASLR enabled May 16 00:51:24.177730 kernel: efi: EFI v2.7 by American Megatrends May 16 00:51:24.177735 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea451818 RNG=0xebf10018 MEMRESERVE=0xe4632f98 May 16 00:51:24.177741 kernel: random: crng init done May 16 00:51:24.177748 kernel: secureboot: Secure boot disabled May 16 00:51:24.177754 kernel: esrt: Reserving ESRT space from 0x00000000ea451818 to 0x00000000ea451878. May 16 00:51:24.177761 kernel: ACPI: Early table checksum verification disabled May 16 00:51:24.177767 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) May 16 00:51:24.177773 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 16 00:51:24.177779 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 16 00:51:24.177785 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 16 00:51:24.177791 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 16 00:51:24.177799 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 16 00:51:24.177805 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 16 00:51:24.177812 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 16 00:51:24.177818 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 16 00:51:24.177824 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) May 16 00:51:24.177830 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 16 00:51:24.177837 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 16 00:51:24.177843 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) May 16 00:51:24.177849 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 16 00:51:24.177855 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 16 00:51:24.177863 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 16 00:51:24.177869 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) May 16 00:51:24.177875 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 16 00:51:24.177881 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 16 00:51:24.177887 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 16 00:51:24.177894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 16 00:51:24.177900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 16 00:51:24.177906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 16 00:51:24.177912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 16 00:51:24.177918 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff] May 16 00:51:24.177924 kernel: Zone ranges: May 16 00:51:24.177932 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 16 00:51:24.177938 kernel: DMA32 empty May 16 00:51:24.177944 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 16 00:51:24.177950 kernel: Movable zone start for each node May 16 00:51:24.177957 kernel: Early memory node ranges May 16 00:51:24.177965 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 16 00:51:24.177972 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] May 16 00:51:24.177980 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 16 00:51:24.177987 kernel: node 0: [mem 0x0000000094000000-0x00000000eba31fff] May 16 00:51:24.177993 kernel: node 0: [mem 0x00000000eba32000-0x00000000ebea8fff] May 16 00:51:24.178000 kernel: node 0: [mem 0x00000000ebea9000-0x00000000ebeaefff] May 16 00:51:24.178006 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff] May 16 00:51:24.178013 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] May 16 00:51:24.178019 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] May 16 00:51:24.178026 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] May 16 00:51:24.178032 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] May 16 00:51:24.178039 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff] May 16 00:51:24.178047 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff] May 16 00:51:24.178054 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 16 00:51:24.178060 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 16 00:51:24.178066 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 16 00:51:24.178073 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 16 00:51:24.178080 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 16 00:51:24.178086 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 16 00:51:24.178092 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 16 00:51:24.178099 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 16 00:51:24.178106 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 16 00:51:24.178112 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 16 00:51:24.178120 kernel: psci: probing for conduit method from ACPI. May 16 00:51:24.178127 kernel: psci: PSCIv1.1 detected in firmware. May 16 00:51:24.178133 kernel: psci: Using standard PSCI v0.2 function IDs May 16 00:51:24.178140 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 16 00:51:24.178146 kernel: psci: SMC Calling Convention v1.2 May 16 00:51:24.178156 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 16 00:51:24.178163 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 16 00:51:24.178170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 16 00:51:24.178177 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 16 00:51:24.178183 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 16 00:51:24.178190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 16 00:51:24.178196 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 16 00:51:24.178204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 16 00:51:24.178211 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 16 00:51:24.178217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 16 00:51:24.178224 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 16 00:51:24.178231 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 16 00:51:24.178237 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 16 00:51:24.178244 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 16 00:51:24.178250 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 16 00:51:24.178257 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 16 00:51:24.178263 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 16 00:51:24.178270 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 16 00:51:24.178276 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 16 00:51:24.178284 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 16 00:51:24.178291 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 16 00:51:24.178297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 May 16 00:51:24.178304 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 16 00:51:24.178310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 May 16 00:51:24.178317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 16 00:51:24.178323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 16 00:51:24.178330 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 16 00:51:24.178336 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 16 00:51:24.178343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 16 00:51:24.178349 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 16 00:51:24.178357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 16 00:51:24.178364 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 16 00:51:24.178371 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 16 00:51:24.178377 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 16 00:51:24.178384 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 16 00:51:24.178390 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 16 00:51:24.178397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 16 00:51:24.178404 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 16 00:51:24.178410 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 16 00:51:24.178417 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 May 16 00:51:24.178423 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 16 00:51:24.178430 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 16 00:51:24.178437 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 16 00:51:24.178444 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 16 00:51:24.178450 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 16 00:51:24.178457 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 16 00:51:24.178463 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 16 00:51:24.178470 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 May 16 00:51:24.178476 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 16 00:51:24.178483 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 16 00:51:24.178497 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 16 00:51:24.178504 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 16 00:51:24.178512 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 16 00:51:24.178519 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 16 00:51:24.178527 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 16 00:51:24.178534 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 16 00:51:24.178540 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 16 00:51:24.178547 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 16 00:51:24.178556 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 16 00:51:24.178563 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 16 00:51:24.178570 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 16 00:51:24.178576 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 16 00:51:24.178584 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 16 00:51:24.178590 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 16 00:51:24.178597 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 16 00:51:24.178604 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 16 00:51:24.178611 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 16 00:51:24.178618 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 16 00:51:24.178625 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 16 00:51:24.178632 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 16 00:51:24.178640 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 16 00:51:24.178647 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 May 16 00:51:24.178654 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 16 00:51:24.178661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 16 00:51:24.178668 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 16 00:51:24.178675 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 16 00:51:24.178682 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 16 00:51:24.178689 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 16 00:51:24.178696 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 16 00:51:24.178703 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 16 00:51:24.178710 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 16 00:51:24.178718 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 16 00:51:24.178726 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 16 00:51:24.178733 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 16 00:51:24.178740 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 16 00:51:24.178746 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 16 00:51:24.178753 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 16 00:51:24.178760 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 16 00:51:24.178767 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 16 00:51:24.178774 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 16 00:51:24.178781 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 16 00:51:24.178788 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 16 00:51:24.178796 kernel: Detected PIPT I-cache on CPU0 May 16 00:51:24.178803 kernel: CPU features: detected: GIC system register CPU interface May 16 00:51:24.178810 kernel: CPU features: detected: Virtualization Host Extensions May 16 00:51:24.178817 kernel: CPU features: detected: Hardware dirty bit management May 16 00:51:24.178824 kernel: CPU features: detected: Spectre-v4 May 16 00:51:24.178831 kernel: CPU features: detected: Spectre-BHB May 16 00:51:24.178838 kernel: CPU features: kernel page table isolation forced ON by KASLR May 16 00:51:24.178845 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 16 00:51:24.178852 kernel: CPU features: detected: ARM erratum 1418040 May 16 00:51:24.178859 kernel: CPU features: detected: SSBS not fully self-synchronizing May 16 00:51:24.178866 kernel: alternatives: applying boot alternatives May 16 00:51:24.178874 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a May 16 00:51:24.178882 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:51:24.178889 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 16 00:51:24.178896 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 16 00:51:24.178903 kernel: printk: log_buf_len min size: 262144 bytes May 16 00:51:24.178910 kernel: printk: log_buf_len: 1048576 bytes May 16 00:51:24.178917 kernel: printk: early log buf free: 249864(95%) May 16 00:51:24.178924 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 16 00:51:24.178932 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 16 00:51:24.178938 kernel: Fallback order for Node 0: 0 May 16 00:51:24.178946 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 16 00:51:24.178954 kernel: Policy zone: Normal May 16 00:51:24.178961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:51:24.178968 kernel: software IO TLB: area num 128. 
May 16 00:51:24.178975 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 16 00:51:24.178982 kernel: Memory: 262922200K/268174336K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 5252136K reserved, 0K cma-reserved) May 16 00:51:24.178989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 16 00:51:24.178996 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:51:24.179004 kernel: rcu: RCU event tracing is enabled. May 16 00:51:24.179011 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 16 00:51:24.179019 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:51:24.179026 kernel: Tracing variant of Tasks RCU enabled. May 16 00:51:24.179033 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 00:51:24.179041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 16 00:51:24.179048 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 16 00:51:24.179055 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 16 00:51:24.179062 kernel: GICv3: 672 SPIs implemented May 16 00:51:24.179069 kernel: GICv3: 0 Extended SPIs implemented May 16 00:51:24.179076 kernel: Root IRQ handler: gic_handle_irq May 16 00:51:24.179083 kernel: GICv3: GICv3 features: 16 PPIs May 16 00:51:24.179090 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 16 00:51:24.179096 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 16 00:51:24.179103 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 16 00:51:24.179110 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 16 00:51:24.179117 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 16 00:51:24.179125 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 16 00:51:24.179132 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 16 00:51:24.179139 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 16 00:51:24.179146 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 16 00:51:24.179154 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 16 00:51:24.179162 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179169 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179176 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 16 00:51:24.179184 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179191 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179198 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 16 00:51:24.179206 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179214 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179221 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 16 00:51:24.179228 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179235 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179242 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 16 00:51:24.179249 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179256 kernel: ITS@0x00001001000c0000: allocated 32768 
Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179263 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 16 00:51:24.179270 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179277 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179285 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 16 00:51:24.179292 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179299 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179306 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 16 00:51:24.179313 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179320 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179328 kernel: GICv3: using LPI property table @0x00000800003e0000 May 16 00:51:24.179335 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 16 00:51:24.179342 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 00:51:24.179349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179356 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 16 00:51:24.179364 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 16 00:51:24.179372 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 16 00:51:24.179379 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 16 00:51:24.179386 kernel: Console: colour dummy device 80x25 May 16 00:51:24.179393 kernel: printk: console [tty0] enabled May 16 00:51:24.179401 kernel: ACPI: Core revision 20230628 May 16 00:51:24.179408 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 16 00:51:24.179415 kernel: pid_max: default: 81920 minimum: 640 May 16 00:51:24.179423 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 16 00:51:24.179430 kernel: landlock: Up and running. May 16 00:51:24.179438 kernel: SELinux: Initializing. May 16 00:51:24.179445 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:51:24.179453 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:51:24.179460 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 16 00:51:24.179467 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 16 00:51:24.179475 kernel: rcu: Hierarchical SRCU implementation. May 16 00:51:24.179482 kernel: rcu: Max phase no-delay instances is 400. 
May 16 00:51:24.179489 kernel: Platform MSI: ITS@0x100100040000 domain created May 16 00:51:24.179496 kernel: Platform MSI: ITS@0x100100060000 domain created May 16 00:51:24.179504 kernel: Platform MSI: ITS@0x100100080000 domain created May 16 00:51:24.179511 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 16 00:51:24.179519 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 16 00:51:24.179526 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 16 00:51:24.179533 kernel: Platform MSI: ITS@0x100100100000 domain created May 16 00:51:24.179540 kernel: Platform MSI: ITS@0x100100120000 domain created May 16 00:51:24.179547 kernel: PCI/MSI: ITS@0x100100040000 domain created May 16 00:51:24.179554 kernel: PCI/MSI: ITS@0x100100060000 domain created May 16 00:51:24.179561 kernel: PCI/MSI: ITS@0x100100080000 domain created May 16 00:51:24.179570 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 16 00:51:24.179577 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 16 00:51:24.179584 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 16 00:51:24.179591 kernel: PCI/MSI: ITS@0x100100100000 domain created May 16 00:51:24.179598 kernel: PCI/MSI: ITS@0x100100120000 domain created May 16 00:51:24.179605 kernel: Remapping and enabling EFI services. May 16 00:51:24.179612 kernel: smp: Bringing up secondary CPUs ... May 16 00:51:24.179619 kernel: Detected PIPT I-cache on CPU1 May 16 00:51:24.179626 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 May 16 00:51:24.179634 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 May 16 00:51:24.179642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179649 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] May 16 00:51:24.179656 kernel: Detected PIPT I-cache on CPU2 May 16 00:51:24.179663 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 May 16 00:51:24.179671 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 May 16 00:51:24.179678 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179685 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] May 16 00:51:24.179692 kernel: Detected PIPT I-cache on CPU3 May 16 00:51:24.179699 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 May 16 00:51:24.179707 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 May 16 00:51:24.179715 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179722 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] May 16 00:51:24.179729 kernel: Detected PIPT I-cache on CPU4 May 16 00:51:24.179736 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 May 16 00:51:24.179743 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 May 16 00:51:24.179750 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179757 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] May 16 00:51:24.179764 kernel: Detected PIPT I-cache on CPU5 May 16 00:51:24.179771 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 May 16 00:51:24.179780 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 May 16 00:51:24.179787 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179794 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] May 16 00:51:24.179801 kernel: Detected PIPT I-cache on CPU6 May 16 00:51:24.179809 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 May 16 00:51:24.179816 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 May 16 00:51:24.179823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179830 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] May 16 00:51:24.179837 kernel: Detected PIPT I-cache on CPU7 May 16 00:51:24.179846 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 May 16 00:51:24.179853 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 May 16 00:51:24.179860 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179867 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] May 16 00:51:24.179874 kernel: Detected PIPT I-cache on CPU8 May 16 00:51:24.179882 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 May 16 00:51:24.179889 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 May 16 00:51:24.179896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179903 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] May 16 00:51:24.179910 kernel: Detected PIPT I-cache on CPU9 May 16 00:51:24.179919 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 May 16 00:51:24.179926 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 May 16 00:51:24.179933 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179940 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] May 16 00:51:24.179947 kernel: Detected PIPT I-cache on CPU10 May 16 00:51:24.179954 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 May 16 00:51:24.179962 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 May 16 00:51:24.179969 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179976 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] May 16 00:51:24.179984 kernel: Detected PIPT I-cache on CPU11 May 16 00:51:24.179991 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 May 16 00:51:24.179998 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 May 16 00:51:24.180005 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180012 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] May 16 00:51:24.180019 kernel: Detected PIPT I-cache on CPU12 May 16 00:51:24.180027 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 May 16 00:51:24.180034 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 May 16 00:51:24.180041 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180048 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] May 16 00:51:24.180056 kernel: Detected PIPT I-cache on CPU13 May 16 00:51:24.180063 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 May 16 00:51:24.180071 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 May 16 00:51:24.180078 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180085 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
May 16 00:51:24.180092 kernel: Detected PIPT I-cache on CPU14 May 16 00:51:24.180099 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 May 16 00:51:24.180107 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 May 16 00:51:24.180114 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180122 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] May 16 00:51:24.180129 kernel: Detected PIPT I-cache on CPU15 May 16 00:51:24.180137 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 May 16 00:51:24.180144 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 May 16 00:51:24.180151 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180160 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] May 16 00:51:24.180167 kernel: Detected PIPT I-cache on CPU16 May 16 00:51:24.180174 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 May 16 00:51:24.180182 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 May 16 00:51:24.180199 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180207 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] May 16 00:51:24.180215 kernel: Detected PIPT I-cache on CPU17 May 16 00:51:24.180222 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 May 16 00:51:24.180230 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 May 16 00:51:24.180237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180245 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] May 16 00:51:24.180252 kernel: Detected PIPT I-cache on CPU18 May 16 00:51:24.180260 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 May 16 00:51:24.180267 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 May 16 00:51:24.180276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180284 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] May 16 00:51:24.180291 kernel: Detected PIPT I-cache on CPU19 May 16 00:51:24.180298 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 16 00:51:24.180306 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 16 00:51:24.180313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180323 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 16 00:51:24.180330 kernel: Detected PIPT I-cache on CPU20 May 16 00:51:24.180338 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 16 00:51:24.180345 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 16 00:51:24.180353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180360 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 16 00:51:24.180368 kernel: Detected PIPT I-cache on CPU21 May 16 00:51:24.180375 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 16 00:51:24.180383 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 16 00:51:24.180392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180399 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 
16 00:51:24.180407 kernel: Detected PIPT I-cache on CPU22 May 16 00:51:24.180414 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 16 00:51:24.180422 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 16 00:51:24.180429 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180437 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 16 00:51:24.180444 kernel: Detected PIPT I-cache on CPU23 May 16 00:51:24.180452 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 16 00:51:24.180460 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 May 16 00:51:24.180468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180476 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 16 00:51:24.180483 kernel: Detected PIPT I-cache on CPU24 May 16 00:51:24.180491 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 16 00:51:24.180498 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 16 00:51:24.180506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180513 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 16 00:51:24.180521 kernel: Detected PIPT I-cache on CPU25 May 16 00:51:24.180529 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 16 00:51:24.180537 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 16 00:51:24.180546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180555 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 16 00:51:24.180562 kernel: Detected PIPT I-cache on CPU26 May 16 00:51:24.180570 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 16 00:51:24.180578 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 16 00:51:24.180585 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180592 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 16 00:51:24.180600 kernel: Detected PIPT I-cache on CPU27 May 16 00:51:24.180609 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 16 00:51:24.180616 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 16 00:51:24.180624 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180631 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 16 00:51:24.180639 kernel: Detected PIPT I-cache on CPU28 May 16 00:51:24.180646 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 16 00:51:24.180654 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 16 00:51:24.180662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180669 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 16 00:51:24.180676 kernel: Detected PIPT I-cache on CPU29 May 16 00:51:24.180685 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 16 00:51:24.180693 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 16 00:51:24.180700 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180708 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
May 16 00:51:24.180715 kernel: Detected PIPT I-cache on CPU30 May 16 00:51:24.180723 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 16 00:51:24.180730 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 16 00:51:24.180738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180745 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 16 00:51:24.180754 kernel: Detected PIPT I-cache on CPU31 May 16 00:51:24.180762 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 16 00:51:24.180770 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 16 00:51:24.180777 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180784 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 16 00:51:24.180792 kernel: Detected PIPT I-cache on CPU32 May 16 00:51:24.180799 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 16 00:51:24.180807 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 May 16 00:51:24.180814 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180822 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 16 00:51:24.180831 kernel: Detected PIPT I-cache on CPU33 May 16 00:51:24.180838 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 16 00:51:24.180846 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 16 00:51:24.180854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180861 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 16 00:51:24.180868 kernel: Detected PIPT I-cache on CPU34 May 16 00:51:24.180876 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 16 00:51:24.180884 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 16 00:51:24.180891 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180900 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 16 00:51:24.180908 kernel: Detected PIPT I-cache on CPU35 May 16 00:51:24.180915 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 16 00:51:24.180923 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 16 00:51:24.180931 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180938 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 16 00:51:24.180945 kernel: Detected PIPT I-cache on CPU36 May 16 00:51:24.180953 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 16 00:51:24.180960 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 16 00:51:24.180968 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180977 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 16 00:51:24.180984 kernel: Detected PIPT I-cache on CPU37 May 16 00:51:24.180992 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 16 00:51:24.180999 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 16 00:51:24.181007 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181014 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
May 16 00:51:24.181022 kernel: Detected PIPT I-cache on CPU38 May 16 00:51:24.181029 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 16 00:51:24.181037 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 16 00:51:24.181046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181053 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 16 00:51:24.181060 kernel: Detected PIPT I-cache on CPU39 May 16 00:51:24.181069 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 16 00:51:24.181077 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 16 00:51:24.181084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181092 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 16 00:51:24.181099 kernel: Detected PIPT I-cache on CPU40 May 16 00:51:24.181108 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 16 00:51:24.181116 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 16 00:51:24.181123 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181131 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 16 00:51:24.181138 kernel: Detected PIPT I-cache on CPU41 May 16 00:51:24.181145 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 16 00:51:24.181219 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 16 00:51:24.181228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181236 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 16 00:51:24.181243 kernel: Detected PIPT I-cache on CPU42 May 16 00:51:24.181253 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 16 00:51:24.181261 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 16 00:51:24.181268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181276 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 16 00:51:24.181283 kernel: Detected PIPT I-cache on CPU43 May 16 00:51:24.181291 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 16 00:51:24.181298 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 16 00:51:24.181306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181314 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 16 00:51:24.181323 kernel: Detected PIPT I-cache on CPU44 May 16 00:51:24.181330 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 16 00:51:24.181338 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 16 00:51:24.181346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181353 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 16 00:51:24.181360 kernel: Detected PIPT I-cache on CPU45 May 16 00:51:24.181368 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 16 00:51:24.181375 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 16 00:51:24.181383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181391 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] May 16 00:51:24.181400 kernel: Detected PIPT I-cache on CPU46 May 16 00:51:24.181408 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 16 00:51:24.181415 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 16 00:51:24.181423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181430 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 16 00:51:24.181438 kernel: Detected PIPT I-cache on CPU47 May 16 00:51:24.181445 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 16 00:51:24.181453 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 16 00:51:24.181461 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181470 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 16 00:51:24.181477 kernel: Detected PIPT I-cache on CPU48 May 16 00:51:24.181485 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 16 00:51:24.181492 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 16 00:51:24.181500 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181507 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 16 00:51:24.181515 kernel: Detected PIPT I-cache on CPU49 May 16 00:51:24.181522 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 16 00:51:24.181530 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 16 00:51:24.181539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181546 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 16 00:51:24.181554 kernel: Detected PIPT I-cache on CPU50 May 16 00:51:24.181561 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 16 00:51:24.181569 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 16 00:51:24.181576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181584 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 16 00:51:24.181592 kernel: Detected PIPT I-cache on CPU51 May 16 00:51:24.181600 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 16 00:51:24.181608 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 16 00:51:24.181617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181624 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 16 00:51:24.181632 kernel: Detected PIPT I-cache on CPU52 May 16 00:51:24.181639 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 16 00:51:24.181647 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 16 00:51:24.181654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181662 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 16 00:51:24.181670 kernel: Detected PIPT I-cache on CPU53 May 16 00:51:24.181677 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 16 00:51:24.181686 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 16 00:51:24.181694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181701 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] May 16 00:51:24.181709 kernel: Detected PIPT I-cache on CPU54 May 16 00:51:24.181716 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 16 00:51:24.181724 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 16 00:51:24.181731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181739 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 16 00:51:24.181746 kernel: Detected PIPT I-cache on CPU55 May 16 00:51:24.181754 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 16 00:51:24.181763 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 16 00:51:24.181770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181778 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 16 00:51:24.181785 kernel: Detected PIPT I-cache on CPU56 May 16 00:51:24.181793 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 16 00:51:24.181800 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 16 00:51:24.181808 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181816 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 16 00:51:24.181824 kernel: Detected PIPT I-cache on CPU57 May 16 00:51:24.181832 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 16 00:51:24.181840 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 16 00:51:24.181848 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181855 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 16 00:51:24.181862 kernel: Detected PIPT I-cache on CPU58 May 16 00:51:24.181870 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 16 00:51:24.181878 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 16 00:51:24.181885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181893 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 16 00:51:24.181900 kernel: Detected PIPT I-cache on CPU59 May 16 00:51:24.181909 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 16 00:51:24.181917 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 16 00:51:24.181924 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181932 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 16 00:51:24.181939 kernel: Detected PIPT I-cache on CPU60 May 16 00:51:24.181947 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 16 00:51:24.181954 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 16 00:51:24.181962 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181969 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 16 00:51:24.181978 kernel: Detected PIPT I-cache on CPU61 May 16 00:51:24.181986 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 16 00:51:24.181993 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 16 00:51:24.182001 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182008 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] May 16 00:51:24.182016 kernel: Detected PIPT I-cache on CPU62 May 16 00:51:24.182023 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 16 00:51:24.182031 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 16 00:51:24.182039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182046 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 16 00:51:24.182055 kernel: Detected PIPT I-cache on CPU63 May 16 00:51:24.182063 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 16 00:51:24.182071 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 16 00:51:24.182078 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182086 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 16 00:51:24.182093 kernel: Detected PIPT I-cache on CPU64 May 16 00:51:24.182100 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 16 00:51:24.182108 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 16 00:51:24.182116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182124 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 16 00:51:24.182132 kernel: Detected PIPT I-cache on CPU65 May 16 00:51:24.182139 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 16 00:51:24.182147 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 16 00:51:24.182157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182165 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 16 00:51:24.182172 kernel: Detected PIPT I-cache on CPU66 May 16 00:51:24.182180 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 16 00:51:24.182187 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 16 00:51:24.182196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182204 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 16 00:51:24.182211 kernel: Detected PIPT I-cache on CPU67 May 16 00:51:24.182219 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 16 00:51:24.182227 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 16 00:51:24.182234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182242 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 16 00:51:24.182249 kernel: Detected PIPT I-cache on CPU68 May 16 00:51:24.182257 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 16 00:51:24.182264 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 16 00:51:24.182273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182281 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 16 00:51:24.182288 kernel: Detected PIPT I-cache on CPU69 May 16 00:51:24.182296 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 16 00:51:24.182304 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 16 00:51:24.182311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182319 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] May 16 00:51:24.182326 kernel: Detected PIPT I-cache on CPU70 May 16 00:51:24.182333 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 16 00:51:24.182343 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 16 00:51:24.182350 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182358 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 16 00:51:24.182365 kernel: Detected PIPT I-cache on CPU71 May 16 00:51:24.182373 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 16 00:51:24.182380 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 16 00:51:24.182388 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182395 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 16 00:51:24.182403 kernel: Detected PIPT I-cache on CPU72 May 16 00:51:24.182410 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 16 00:51:24.182419 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 16 00:51:24.182427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182434 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 16 00:51:24.182442 kernel: Detected PIPT I-cache on CPU73 May 16 00:51:24.182449 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 16 00:51:24.182457 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 16 00:51:24.182464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182472 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 16 00:51:24.182479 kernel: Detected PIPT I-cache on CPU74 May 16 00:51:24.182488 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 16 00:51:24.182495 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 16 00:51:24.182503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182511 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 16 00:51:24.182518 kernel: Detected PIPT I-cache on CPU75 May 16 00:51:24.182526 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 16 00:51:24.182533 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 16 00:51:24.182541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182548 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 16 00:51:24.182556 kernel: Detected PIPT I-cache on CPU76 May 16 00:51:24.182565 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 16 00:51:24.182572 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 16 00:51:24.182580 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182587 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 16 00:51:24.182595 kernel: Detected PIPT I-cache on CPU77 May 16 00:51:24.182602 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 16 00:51:24.182610 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 16 00:51:24.182617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182625 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] May 16 00:51:24.182634 kernel: Detected PIPT I-cache on CPU78 May 16 00:51:24.182641 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 16 00:51:24.182649 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 16 00:51:24.182656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182664 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 16 00:51:24.182671 kernel: Detected PIPT I-cache on CPU79 May 16 00:51:24.182678 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 16 00:51:24.182686 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 16 00:51:24.182694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182701 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 16 00:51:24.182710 kernel: smp: Brought up 1 node, 80 CPUs May 16 00:51:24.182718 kernel: SMP: Total of 80 processors activated. May 16 00:51:24.182725 kernel: CPU features: detected: 32-bit EL0 Support May 16 00:51:24.182733 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 16 00:51:24.182740 kernel: CPU features: detected: Common not Private translations May 16 00:51:24.182748 kernel: CPU features: detected: CRC32 instructions May 16 00:51:24.182755 kernel: CPU features: detected: Enhanced Virtualization Traps May 16 00:51:24.182763 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 16 00:51:24.182770 kernel: CPU features: detected: LSE atomic instructions May 16 00:51:24.182779 kernel: CPU features: detected: Privileged Access Never May 16 00:51:24.182787 kernel: CPU features: detected: RAS Extension Support May 16 00:51:24.182794 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 16 00:51:24.182801 kernel: CPU: All CPU(s) started at EL2 May 16 00:51:24.182809 kernel: alternatives: applying system-wide alternatives May 16 00:51:24.182817 kernel: devtmpfs: initialized May 16 00:51:24.182824 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:51:24.182832 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 16 00:51:24.182839 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:51:24.182848 kernel: SMBIOS 3.4.0 present. May 16 00:51:24.182856 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 16 00:51:24.182863 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:51:24.182871 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 16 00:51:24.182878 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 16 00:51:24.182886 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 16 00:51:24.182893 kernel: audit: initializing netlink subsys (disabled) May 16 00:51:24.182901 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 16 00:51:24.182909 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:51:24.182917 kernel: cpuidle: using governor menu May 16 00:51:24.182925 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 16 00:51:24.182932 kernel: ASID allocator initialised with 32768 entries May 16 00:51:24.182939 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:51:24.182947 kernel: Serial: AMBA PL011 UART driver May 16 00:51:24.182954 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 16 00:51:24.182962 kernel: Modules: 0 pages in range for non-PLT usage May 16 00:51:24.182969 kernel: Modules: 508944 pages in range for PLT usage May 16 00:51:24.182977 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:51:24.182986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 16 00:51:24.182993 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 16 00:51:24.183001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 16 00:51:24.183008 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:51:24.183016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 16 00:51:24.183023 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 16 00:51:24.183031 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 16 00:51:24.183038 kernel: ACPI: Added _OSI(Module Device) May 16 00:51:24.183046 kernel: ACPI: Added _OSI(Processor Device) May 16 00:51:24.183054 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:51:24.183062 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:51:24.183069 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 16 00:51:24.183077 kernel: ACPI: Interpreter enabled May 16 00:51:24.183084 kernel: ACPI: Using GIC for interrupt routing May 16 00:51:24.183092 kernel: ACPI: MCFG table detected, 8 entries May 16 00:51:24.183099 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183107 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183114 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183124 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183131 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183139 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183147 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183156 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183164 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 16 00:51:24.183171 kernel: printk: console [ttyAMA0] enabled May 16 00:51:24.183179 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 16 00:51:24.183188 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 16 00:51:24.183316 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.183389 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.183452 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.183513 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.183573 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.183633 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] May 16 00:51:24.183645 kernel: PCI host bridge to bus 000d:00 May 16 00:51:24.183716 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 16 00:51:24.183772 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 16 00:51:24.183828 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 16 00:51:24.183906 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.183981 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.184048 kernel: pci 000d:00:01.0: enabling Extended Tags May 16 00:51:24.184114 kernel: pci 000d:00:01.0: supports D1 D2 May 16 00:51:24.184183 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184258 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.184324 kernel: pci 000d:00:02.0: supports D1 D2 May 16 00:51:24.184387 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184461 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.184528 kernel: pci 000d:00:03.0: supports D1 D2 May 16 00:51:24.184594 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184665 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.184730 kernel: pci 000d:00:04.0: supports D1 D2 May 16 00:51:24.184793 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184803 kernel: acpiphp: Slot [1] registered May 16 00:51:24.184810 kernel: acpiphp: Slot [2] registered May 16 00:51:24.184820 kernel: acpiphp: Slot [3] registered May 16 00:51:24.184827 kernel: acpiphp: Slot [4] registered May 16 00:51:24.184885 kernel: pci_bus 000d:00: on NUMA node 0 May 16 00:51:24.184950 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.185016 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.185082 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.185147 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.185215 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.185282 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.185347 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.185412 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.185476 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.185540 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.185605 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.185669 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.185736 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 16 00:51:24.185800 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 16 00:51:24.185864 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] May 16 00:51:24.185928 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 16 00:51:24.185993 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 16 00:51:24.186057 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 16 00:51:24.186121 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 16 00:51:24.186192 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 16 00:51:24.186257 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186321 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186386 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186449 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186513 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186576 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186641 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186708 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186771 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186835 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186901 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186966 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.187029 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.187094 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.187161 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.187228 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.187292 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.187356 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 16 00:51:24.187420 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 16 00:51:24.187485 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.187548 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 16 00:51:24.187613 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 16 00:51:24.187680 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.187743 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 16 00:51:24.187808 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 16 00:51:24.187872 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.187936 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 16 00:51:24.187999 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 16 00:51:24.188061 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 16 00:51:24.188117 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 16 00:51:24.188192 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 16 00:51:24.188252 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 16 00:51:24.188320 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] May 16 00:51:24.188380 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 16 00:51:24.188460 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 16 00:51:24.188518 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 16 00:51:24.188585 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 16 00:51:24.188644 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 16 00:51:24.188654 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 16 00:51:24.188723 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.188788 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.188849 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.188911 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.188971 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 16 00:51:24.189032 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 16 00:51:24.189042 kernel: PCI host bridge to bus 0000:00 May 16 00:51:24.189109 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 16 00:51:24.189172 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 16 00:51:24.189229 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:51:24.189301 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.189373 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.189438 kernel: pci 0000:00:01.0: enabling Extended Tags May 16 00:51:24.189502 kernel: pci 0000:00:01.0: supports D1 D2 May 16 00:51:24.189567 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.189638 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.189703 kernel: pci 0000:00:02.0: supports D1 D2 May 16 00:51:24.189768 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.189840 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.189905 kernel: pci 0000:00:03.0: supports D1 D2 May 16 00:51:24.189968 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.190038 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.190104 kernel: pci 0000:00:04.0: supports D1 D2 May 16 00:51:24.190172 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.190182 kernel: acpiphp: Slot [1-1] registered May 16 00:51:24.190190 kernel: acpiphp: Slot [2-1] registered May 16 00:51:24.190197 kernel: acpiphp: Slot [3-1] registered May 16 00:51:24.190205 kernel: acpiphp: Slot [4-1] registered May 16 00:51:24.190259 kernel: pci_bus 0000:00: on NUMA node 0 May 16 00:51:24.190325 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.190391 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.190456 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.190519 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.190584 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.190648 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.190712 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.190775 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.190842 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.190906 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.190969 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.191034 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.191098 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 16 00:51:24.191167 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 16 00:51:24.191230 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 16 00:51:24.191298 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 16 00:51:24.191361 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 16 00:51:24.191426 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 16 00:51:24.191490 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 16 00:51:24.191554 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 16 00:51:24.191617 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.191681 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.191744 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.191810 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.191874 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.191938 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192002 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192065 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192129 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192197 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192264 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192329 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192393 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192457 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192521 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192583 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192648 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.192711 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 16 00:51:24.192775 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
May 16 00:51:24.192841 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.192904 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 16 00:51:24.192969 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 16 00:51:24.193032 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.193099 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 16 00:51:24.193165 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 16 00:51:24.193231 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.193293 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 16 00:51:24.193358 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 16 00:51:24.193416 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 16 00:51:24.193476 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 16 00:51:24.193544 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 16 00:51:24.193604 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 16 00:51:24.193670 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 16 00:51:24.193729 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 16 00:51:24.193803 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 16 00:51:24.193865 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 16 00:51:24.193931 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 16 00:51:24.193991 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 16 00:51:24.194001 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 16 00:51:24.194071 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.194133 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.194202 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.194263 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.194325 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 16 00:51:24.194386 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 16 00:51:24.194396 kernel: PCI host bridge to bus 0005:00 May 16 00:51:24.194460 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 16 00:51:24.194517 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 16 00:51:24.194576 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 16 00:51:24.194647 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.194719 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.194783 kernel: pci 0005:00:01.0: supports D1 D2 May 16 00:51:24.194847 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.194918 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.194982 kernel: pci 0005:00:03.0: supports D1 D2 May 16 00:51:24.195049 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.195121 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.195191 
kernel: pci 0005:00:05.0: supports D1 D2 May 16 00:51:24.195255 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.195329 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 16 00:51:24.195394 kernel: pci 0005:00:07.0: supports D1 D2 May 16 00:51:24.195458 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 16 00:51:24.195471 kernel: acpiphp: Slot [1-2] registered May 16 00:51:24.195478 kernel: acpiphp: Slot [2-2] registered May 16 00:51:24.195552 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 16 00:51:24.195620 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 16 00:51:24.195686 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 16 00:51:24.195760 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 16 00:51:24.195827 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 16 00:51:24.195895 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 16 00:51:24.195954 kernel: pci_bus 0005:00: on NUMA node 0 May 16 00:51:24.196019 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.196085 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.196149 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.196221 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.196285 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.196353 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.196421 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.196496 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.196604 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 16 00:51:24.196675 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.196741 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.196815 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 16 00:51:24.196881 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 16 00:51:24.196945 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 16 00:51:24.197010 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 16 00:51:24.197073 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 16 00:51:24.197138 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 16 00:51:24.197206 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 16 00:51:24.197271 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 16 00:51:24.197337 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 16 00:51:24.197402 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
May 16 00:51:24.197465 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197531 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197597 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197662 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197725 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197789 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197856 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197920 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197984 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198046 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.198110 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198177 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.198242 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198306 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.198371 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198438 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.198503 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 16 00:51:24.198568 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 16 00:51:24.198633 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 16 00:51:24.198697 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 16 00:51:24.198762 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 16 00:51:24.198834 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 16 00:51:24.198899 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 16 00:51:24.198965 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 16 00:51:24.199028 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 16 00:51:24.199092 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 16 00:51:24.199162 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 16 00:51:24.199229 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 16 00:51:24.199295 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 16 00:51:24.199360 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 16 00:51:24.199424 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 16 00:51:24.199499 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 16 00:51:24.199556 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 16 00:51:24.199626 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 16 00:51:24.199686 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 16 00:51:24.199764 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 16 00:51:24.199824 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 16 00:51:24.199889 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 16 00:51:24.199951 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 16 00:51:24.200017 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 16 00:51:24.200080 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 16 00:51:24.200090 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 16 00:51:24.200162 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.200227 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.200288 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.200352 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.200413 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.200477 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 16 00:51:24.200487 kernel: PCI host bridge to bus 0003:00 May 16 00:51:24.200553 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 16 00:51:24.200611 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 16 00:51:24.200669 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 16 00:51:24.200740 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.200813 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.200881 kernel: pci 0003:00:01.0: supports D1 D2 May 16 00:51:24.200946 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.201016 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.201082 kernel: pci 0003:00:03.0: supports D1 D2 May 16 00:51:24.201146 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.201221 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.201288 kernel: pci 0003:00:05.0: supports D1 D2 May 16 00:51:24.201354 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.201364 kernel: acpiphp: Slot [1-3] registered May 16 00:51:24.201372 kernel: acpiphp: Slot [2-3] registered May 16 00:51:24.201444 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 16 00:51:24.201512 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 16 00:51:24.201578 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 16 00:51:24.201643 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 16 00:51:24.201711 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 16 00:51:24.201776 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 16 00:51:24.201844 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 16 00:51:24.201909 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 16 00:51:24.201976 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 16 00:51:24.202042 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 16 00:51:24.202116 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 16 00:51:24.202191 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 16 00:51:24.202262 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 16 00:51:24.202334 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 16 00:51:24.202405 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 16 00:51:24.202475 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 16 00:51:24.202542 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 16 00:51:24.202609 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 16 00:51:24.202678 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 16 00:51:24.202736 kernel: pci_bus 0003:00: on NUMA node 0 May 16 00:51:24.202802 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.202866 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.202930 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.202995 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.203060 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.203123 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.203194 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 16 00:51:24.203257 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 16 00:51:24.203323 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 16 00:51:24.203397 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 16 00:51:24.203464 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 16 00:51:24.203527 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 16 00:51:24.203591 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 16 00:51:24.203658 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 16 00:51:24.203722 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.203789 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.203852 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.203917 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.203980 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204046 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.204109 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204276 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.204344 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204407 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.204470 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204532 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 
00:51:24.204594 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.204657 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 16 00:51:24.204719 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 16 00:51:24.204786 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 16 00:51:24.204851 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 16 00:51:24.204916 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 16 00:51:24.204982 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 16 00:51:24.205049 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 16 00:51:24.205113 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 16 00:51:24.205186 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 16 00:51:24.205251 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 16 00:51:24.205317 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 16 00:51:24.205382 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 16 00:51:24.205448 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 16 00:51:24.205514 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205578 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.205644 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205709 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.205774 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205837 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.205902 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205967 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.206030 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 16 00:51:24.206092 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 16 00:51:24.206160 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 16 00:51:24.206219 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 16 00:51:24.206275 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 16 00:51:24.206331 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 16 00:51:24.206406 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 16 00:51:24.206466 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 16 00:51:24.206534 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 16 00:51:24.206593 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 16 00:51:24.206657 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 16 00:51:24.206716 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 16 00:51:24.206726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 16 00:51:24.206793 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.206858 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 
00:51:24.206919 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.206979 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.207040 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.207100 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 16 00:51:24.207111 kernel: PCI host bridge to bus 000c:00 May 16 00:51:24.207178 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 16 00:51:24.207237 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 16 00:51:24.207294 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 16 00:51:24.207364 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.207436 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.207500 kernel: pci 000c:00:01.0: enabling Extended Tags May 16 00:51:24.207564 kernel: pci 000c:00:01.0: supports D1 D2 May 16 00:51:24.207626 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.207698 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.207762 kernel: pci 000c:00:02.0: supports D1 D2 May 16 00:51:24.207825 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.207896 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.207959 kernel: pci 000c:00:03.0: supports D1 D2 May 16 00:51:24.208022 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.208091 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.208161 kernel: pci 000c:00:04.0: supports D1 D2 May 16 00:51:24.208225 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.208235 kernel: acpiphp: Slot [1-4] registered May 16 00:51:24.208243 kernel: acpiphp: Slot [2-4] registered May 16 00:51:24.208251 kernel: acpiphp: Slot [3-2] registered May 16 00:51:24.208259 kernel: acpiphp: Slot [4-2] registered May 16 00:51:24.208314 kernel: pci_bus 000c:00: on NUMA node 0 May 16 00:51:24.208377 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.208443 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.208507 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.208571 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.208635 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.208701 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.208766 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.208830 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.208894 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.208958 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.209023 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.209088 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.209155 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 16 00:51:24.209220 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] May 16 00:51:24.209286 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 16 00:51:24.209352 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 16 00:51:24.209418 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 16 00:51:24.209481 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 16 00:51:24.209547 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 16 00:51:24.209611 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 16 00:51:24.209676 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.209739 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.209804 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.209869 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.209935 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.209999 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210063 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210127 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210194 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210259 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210323 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210387 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210453 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210518 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210581 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210647 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210711 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.210776 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 16 00:51:24.210840 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 16 00:51:24.210907 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.210970 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 16 00:51:24.211036 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 16 00:51:24.211101 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.211168 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 16 00:51:24.211234 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 16 00:51:24.211298 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.211366 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 16 00:51:24.211430 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 16 
00:51:24.211490 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 16 00:51:24.211547 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 16 00:51:24.211618 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 16 00:51:24.211677 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 16 00:51:24.211755 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 16 00:51:24.211815 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 16 00:51:24.211883 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 16 00:51:24.211942 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 16 00:51:24.212010 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 16 00:51:24.212069 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 16 00:51:24.212081 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 16 00:51:24.212156 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.212222 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.212284 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.212347 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.212408 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.212474 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 16 00:51:24.212487 kernel: PCI host bridge to bus 0002:00 May 16 00:51:24.212553 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 16 00:51:24.212612 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 16 00:51:24.212668 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 16 00:51:24.212741 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.212812 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.212881 kernel: pci 0002:00:01.0: supports D1 D2 May 16 00:51:24.212948 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213020 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.213085 kernel: pci 0002:00:03.0: supports D1 D2 May 16 00:51:24.213149 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213224 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.213289 kernel: pci 0002:00:05.0: supports D1 D2 May 16 00:51:24.213354 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213429 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 16 00:51:24.213495 kernel: pci 0002:00:07.0: supports D1 D2 May 16 00:51:24.213559 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213569 kernel: acpiphp: Slot [1-5] registered May 16 00:51:24.213577 kernel: acpiphp: Slot [2-5] registered May 16 00:51:24.213585 kernel: acpiphp: Slot [3-3] registered May 16 00:51:24.213592 kernel: acpiphp: Slot [4-3] registered May 16 00:51:24.213649 kernel: pci_bus 0002:00: on NUMA node 0 May 16 00:51:24.213716 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.213780 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.213844 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.213912 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.213977 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.214042 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.214106 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.214174 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.214241 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.214306 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.214371 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.214438 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.214504 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 16 00:51:24.214568 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 16 00:51:24.214633 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 16 00:51:24.214696 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 16 00:51:24.214761 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 16 00:51:24.214824 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 16 00:51:24.214889 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 16 00:51:24.214956 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 16 00:51:24.215021 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215084 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215150 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215217 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215283 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215347 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215411 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215478 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215542 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215607 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215670 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215734 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215798 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215863 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215927 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] May 16 00:51:24.215991 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.216056 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.216121 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 16 00:51:24.216190 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 16 00:51:24.216256 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 16 00:51:24.216323 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 16 00:51:24.216388 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 16 00:51:24.216456 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 16 00:51:24.216520 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 16 00:51:24.216587 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 16 00:51:24.216651 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 16 00:51:24.216717 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 16 00:51:24.216781 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 16 00:51:24.216844 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 16 00:51:24.216901 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 16 00:51:24.216973 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 16 00:51:24.217034 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 16 00:51:24.217102 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 16 00:51:24.217325 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 16 00:51:24.217410 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 16 00:51:24.217472 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 16 00:51:24.217538 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 16 00:51:24.217601 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 16 00:51:24.217613 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 16 00:51:24.217683 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.217744 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.217807 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.217867 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.217927 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 16 00:51:24.218006 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 16 00:51:24.218017 kernel: PCI host bridge to bus 0001:00 May 16 00:51:24.218082 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 16 00:51:24.218140 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 16 00:51:24.218205 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 16 00:51:24.218276 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.218348 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.218413 kernel: pci 0001:00:01.0: enabling Extended Tags May 16 00:51:24.218476 kernel: pci 
0001:00:01.0: supports D1 D2 May 16 00:51:24.218539 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.218610 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.218676 kernel: pci 0001:00:02.0: supports D1 D2 May 16 00:51:24.218739 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.218810 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.218873 kernel: pci 0001:00:03.0: supports D1 D2 May 16 00:51:24.218936 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.219005 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.219072 kernel: pci 0001:00:04.0: supports D1 D2 May 16 00:51:24.219136 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.219146 kernel: acpiphp: Slot [1-6] registered May 16 00:51:24.219223 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 16 00:51:24.219289 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 16 00:51:24.219353 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 16 00:51:24.219418 kernel: pci 0001:01:00.0: PME# supported from D3cold May 16 00:51:24.219485 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 16 00:51:24.219557 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 16 00:51:24.219622 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 16 00:51:24.219688 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 16 00:51:24.219752 kernel: pci 0001:01:00.1: PME# supported from D3cold May 16 00:51:24.219762 kernel: acpiphp: Slot [2-6] registered May 16 00:51:24.219770 kernel: acpiphp: Slot [3-4] registered May 16 00:51:24.219778 kernel: acpiphp: Slot [4-4] registered May 16 00:51:24.219835 kernel: pci_bus 0001:00: on NUMA node 0 May 16 00:51:24.219899 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.219965 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.220028 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.220091 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.220283 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.220362 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.220430 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.220496 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.220560 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.220623 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.220687 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 16 00:51:24.220752 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 16 00:51:24.220815 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] May 16 00:51:24.220880 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 16 00:51:24.220952 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 16 00:51:24.221016 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 16 00:51:24.221078 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 16 00:51:24.221142 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 16 00:51:24.221214 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221279 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221341 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221407 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221470 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221533 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221596 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221659 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221722 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221785 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221847 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221911 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221974 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.222036 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.222101 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.222168 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.222235 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 16 00:51:24.222302 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 16 00:51:24.222367 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 16 00:51:24.222434 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 16 00:51:24.222497 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.222560 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 16 00:51:24.222623 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 16 00:51:24.222687 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.222749 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 16 00:51:24.222812 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 16 00:51:24.222877 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.222940 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 16 00:51:24.223003 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 16 00:51:24.223067 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.223130 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 16 00:51:24.223198 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 16 00:51:24.223259 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] May 16 00:51:24.223316 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 16 00:51:24.223393 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 16 00:51:24.223452 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 16 00:51:24.223518 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 16 00:51:24.223577 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 16 00:51:24.223644 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 16 00:51:24.223703 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 16 00:51:24.223768 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 16 00:51:24.223826 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 16 00:51:24.223836 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 16 00:51:24.223904 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.223968 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.224029 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.224089 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.224150 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 16 00:51:24.224434 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 16 00:51:24.224446 kernel: PCI host bridge to bus 0004:00 May 16 00:51:24.224512 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 16 00:51:24.224573 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 16 00:51:24.224628 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 16 00:51:24.224698 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.224768 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.224832 kernel: pci 0004:00:01.0: supports D1 D2 May 16 00:51:24.224894 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.224963 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.225032 kernel: pci 0004:00:03.0: supports D1 D2 May 16 00:51:24.225094 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.225168 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.225233 kernel: pci 0004:00:05.0: supports D1 D2 May 16 00:51:24.225295 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.225368 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 16 00:51:24.225434 kernel: pci 0004:01:00.0: enabling Extended Tags May 16 00:51:24.225502 kernel: pci 0004:01:00.0: supports D1 D2 May 16 00:51:24.225567 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 16 00:51:24.225645 kernel: pci_bus 0004:02: extended config space not accessible May 16 00:51:24.225719 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 16 00:51:24.225787 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 16 00:51:24.225855 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 16 00:51:24.225922 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 16 
00:51:24.225992 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 16 00:51:24.226059 kernel: pci 0004:02:00.0: supports D1 D2 May 16 00:51:24.226126 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 16 00:51:24.226204 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 16 00:51:24.226273 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 16 00:51:24.226338 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 16 00:51:24.226395 kernel: pci_bus 0004:00: on NUMA node 0 May 16 00:51:24.226461 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 16 00:51:24.226525 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.226589 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.226651 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 16 00:51:24.226715 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.226778 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.226841 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.226907 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 16 00:51:24.226970 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 16 00:51:24.227034 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 16 00:51:24.227097 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 16 00:51:24.227163 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 16 00:51:24.227226 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 16 00:51:24.227290 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227353 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227418 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227481 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227543 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227605 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227668 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227731 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227795 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227857 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227922 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227984 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.228050 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 16 00:51:24.228116 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.228184 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.228254 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 16 
00:51:24.228321 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 16 00:51:24.228388 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 16 00:51:24.228458 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 16 00:51:24.228523 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 16 00:51:24.228587 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 16 00:51:24.228650 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 16 00:51:24.228714 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 16 00:51:24.228776 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 16 00:51:24.228843 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 16 00:51:24.228906 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.228971 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 16 00:51:24.229034 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 16 00:51:24.229098 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 16 00:51:24.229163 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 16 00:51:24.229227 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 16 00:51:24.229285 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 16 00:51:24.229346 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 16 00:51:24.229402 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 16 00:51:24.229469 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 16 00:51:24.229529 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 16 00:51:24.229591 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] May 16 00:51:24.229658 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 16 00:51:24.229716 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 16 00:51:24.229783 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 16 00:51:24.229841 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 16 00:51:24.229852 kernel: iommu: Default domain type: Translated May 16 00:51:24.229860 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 16 00:51:24.229868 kernel: efivars: Registered efivars operations May 16 00:51:24.229934 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 16 00:51:24.230004 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 16 00:51:24.230072 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 16 00:51:24.230085 kernel: vgaarb: loaded May 16 00:51:24.230093 kernel: clocksource: Switched to clocksource arch_sys_counter May 16 00:51:24.230101 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:51:24.230109 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:51:24.230117 kernel: pnp: PnP ACPI init May 16 00:51:24.230189 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 16 00:51:24.230248 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 16 00:51:24.230308 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 16 00:51:24.230365 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved May 16 00:51:24.230423 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 16 00:51:24.230479 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 16 00:51:24.230537 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved May 16 00:51:24.230594 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 16 00:51:24.230604 kernel: pnp: PnP ACPI: found 1 devices May 16 00:51:24.230615 kernel: NET: Registered PF_INET protocol family May 16 00:51:24.230623 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230631 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 16 00:51:24.230639 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:51:24.230647 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 00:51:24.230655 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230664 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 16 00:51:24.230672 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230681 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:51:24.230756 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 16 00:51:24.230767 kernel: kvm [1]: IPA Size Limit: 48 bits May 16 00:51:24.230775 kernel: kvm [1]: GICv3: no GICV resource entry May 16 00:51:24.230783 kernel: kvm [1]: disabling GICv2 emulation May 16 00:51:24.230791 kernel: kvm [1]: GIC system register CPU interface enabled May 16 00:51:24.230799 kernel: kvm [1]: vgic interrupt IRQ9 May 16 00:51:24.230807 kernel: kvm [1]: VHE mode initialized successfully May 16 00:51:24.230817 kernel: Initialise system trusted keyrings May 16 00:51:24.230825 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 16 00:51:24.230833 kernel: Key type asymmetric registered May 16 00:51:24.230840 kernel: Asymmetric key parser 'x509' registered May 16 00:51:24.230848 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 16 00:51:24.230856 kernel: io scheduler mq-deadline registered May 16 00:51:24.230864 kernel: io scheduler kyber registered May 16 00:51:24.230872 kernel: io scheduler bfq registered May 16 00:51:24.230880 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 16 00:51:24.230888 kernel: ACPI: button: Power Button [PWRB] May 16 00:51:24.230897 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
May 16 00:51:24.230905 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:51:24.230975 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 16 00:51:24.231036 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.231096 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.231158 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 16 00:51:24.231219 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 16 00:51:24.231279 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 16 00:51:24.231346 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 16 00:51:24.231405 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.231464 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.231522 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 16 00:51:24.231580 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 16 00:51:24.231640 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 16 00:51:24.231706 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 16 00:51:24.231764 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.231823 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.231881 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 16 00:51:24.231942 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 16 00:51:24.232000 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 16 00:51:24.232069 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 16 00:51:24.232128 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.232191 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.232250 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 16 00:51:24.232308 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 16 00:51:24.232367 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 16 00:51:24.232440 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 16 00:51:24.232502 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.232560 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.232619 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 16 00:51:24.232676 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 16 00:51:24.232735 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 16 00:51:24.232800 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 16 00:51:24.232862 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.232920 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.232979 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 16 00:51:24.233036 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 16 
00:51:24.233095 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 16 00:51:24.233167 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 16 00:51:24.233229 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.233288 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.233347 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 16 00:51:24.233407 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 16 00:51:24.233465 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 16 00:51:24.233529 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 16 00:51:24.233590 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.233649 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.233707 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 16 00:51:24.233766 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 16 00:51:24.233826 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 16 00:51:24.233837 kernel: thunder_xcv, ver 1.0 May 16 00:51:24.233845 kernel: thunder_bgx, ver 1.0 May 16 00:51:24.233853 kernel: nicpf, ver 1.0 May 16 00:51:24.233863 kernel: nicvf, ver 1.0 May 16 00:51:24.233927 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 16 00:51:24.233986 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:51:22 UTC (1747356682) May 16 00:51:24.233997 kernel: efifb: probing for efifb May 16 00:51:24.234005 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 16 00:51:24.234013 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 16 00:51:24.234021 kernel: efifb: scrolling: redraw May 16 00:51:24.234029 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 16 00:51:24.234039 kernel: Console: switching to colour frame buffer device 100x37 May 16 00:51:24.234047 kernel: fb0: EFI VGA frame buffer device May 16 00:51:24.234055 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 16 00:51:24.234063 kernel: hid: raw HID events driver (C) Jiri Kosina May 16 00:51:24.234072 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 16 00:51:24.234079 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 16 00:51:24.234088 kernel: watchdog: Hard watchdog permanently disabled May 16 00:51:24.234096 kernel: NET: Registered PF_INET6 protocol family May 16 00:51:24.234103 kernel: Segment Routing with IPv6 May 16 00:51:24.234113 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:51:24.234120 kernel: NET: Registered PF_PACKET protocol family May 16 00:51:24.234128 kernel: Key type dns_resolver registered May 16 00:51:24.234136 kernel: registered taskstats version 1 May 16 00:51:24.234144 kernel: Loading compiled-in X.509 certificates May 16 00:51:24.234156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: c5ee9c587519d4ef57ff0de9630e786a4c7faded' May 16 00:51:24.234164 kernel: Key type .fscrypt registered May 16 00:51:24.234172 kernel: Key type fscrypt-provisioning registered May 16 00:51:24.234179 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 16 00:51:24.234190 kernel: ima: Allocated hash algorithm: sha1 May 16 00:51:24.234198 kernel: ima: No architecture policies found May 16 00:51:24.234206 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 16 00:51:24.234273 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 16 00:51:24.234338 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 16 00:51:24.234403 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 16 00:51:24.234467 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 16 00:51:24.234532 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 16 00:51:24.234596 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 16 00:51:24.234662 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 16 00:51:24.234726 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 16 00:51:24.234791 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 16 00:51:24.234855 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 16 00:51:24.234919 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 16 00:51:24.234982 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 16 00:51:24.235047 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 16 00:51:24.235111 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 16 00:51:24.235181 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 16 00:51:24.235246 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 16 00:51:24.235311 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 16 00:51:24.235375 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 16 00:51:24.235440 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 16 00:51:24.235503 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 16 00:51:24.235568 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 16 00:51:24.235630 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 16 00:51:24.235697 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 May 16 00:51:24.235760 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 16 00:51:24.235825 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 16 00:51:24.235888 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 16 00:51:24.235952 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 16 00:51:24.236015 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 16 00:51:24.236080 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 16 00:51:24.236144 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 16 00:51:24.236213 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 16 00:51:24.236279 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 16 00:51:24.236344 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 16 00:51:24.236408 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 16 00:51:24.236472 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 16 00:51:24.236536 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 16 00:51:24.236599 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 16 00:51:24.236662 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 16 00:51:24.236727 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 16 00:51:24.236793 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 16 00:51:24.236856 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 16 00:51:24.236920 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 16 00:51:24.236984 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 16 00:51:24.237048 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 16 00:51:24.237112 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 16 00:51:24.237179 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 16 00:51:24.237244 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 16 00:51:24.237311 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 May 16 00:51:24.237374 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 16 00:51:24.237437 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 16 00:51:24.237501 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 16 00:51:24.237564 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 16 00:51:24.237629 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 16 00:51:24.237691 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 16 00:51:24.237756 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 16 00:51:24.237822 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 16 00:51:24.237887 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 16 00:51:24.237951 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 16 00:51:24.238017 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 16 00:51:24.238080 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 16 00:51:24.238147 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 16 00:51:24.238161 kernel: clk: Disabling unused clocks May 16 00:51:24.238169 kernel: Freeing unused kernel memory: 39744K May 16 00:51:24.238179 kernel: Run /init as init process May 16 00:51:24.238187 kernel: with arguments: May 16 00:51:24.238195 kernel: /init May 16 00:51:24.238203 kernel: with environment: May 16 00:51:24.238210 kernel: HOME=/ May 16 00:51:24.238218 kernel: TERM=linux May 16 00:51:24.238225 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:51:24.238235 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:51:24.238247 systemd[1]: Detected architecture arm64. May 16 00:51:24.238256 systemd[1]: Running in initrd. May 16 00:51:24.238264 systemd[1]: No hostname configured, using default hostname. May 16 00:51:24.238272 systemd[1]: Hostname set to . May 16 00:51:24.238280 systemd[1]: Initializing machine ID from random generator. May 16 00:51:24.238289 systemd[1]: Queued start job for default target initrd.target. May 16 00:51:24.238297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:51:24.238306 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:51:24.238316 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:51:24.238325 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:51:24.238333 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:51:24.238342 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 16 00:51:24.238352 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:51:24.238361 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:51:24.238369 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:51:24.238379 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:51:24.238387 systemd[1]: Reached target paths.target - Path Units. May 16 00:51:24.238396 systemd[1]: Reached target slices.target - Slice Units. May 16 00:51:24.238404 systemd[1]: Reached target swap.target - Swaps. May 16 00:51:24.238412 systemd[1]: Reached target timers.target - Timer Units. May 16 00:51:24.238420 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:51:24.238429 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:51:24.238437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:51:24.238447 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 16 00:51:24.238455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:51:24.238463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:51:24.238472 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:51:24.238480 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:51:24.238488 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:51:24.238497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:51:24.238505 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:51:24.238513 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:51:24.238523 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:51:24.238531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:51:24.238561 systemd-journald[899]: Collecting audit messages is disabled. May 16 00:51:24.238582 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:51:24.238592 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:51:24.238600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:51:24.238608 kernel: Bridge firewalling registered May 16 00:51:24.238617 systemd-journald[899]: Journal started May 16 00:51:24.238636 systemd-journald[899]: Runtime Journal (/run/log/journal/ce0abba997da429ea636b185849751e9) is 8.0M, max 4.0G, 3.9G free. May 16 00:51:24.197418 systemd-modules-load[901]: Inserted module 'overlay' May 16 00:51:24.273309 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:51:24.221151 systemd-modules-load[901]: Inserted module 'br_netfilter' May 16 00:51:24.279867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:51:24.291474 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:51:24.302190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:51:24.313515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 00:51:24.340359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:51:24.357875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:51:24.365460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:51:24.377019 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:51:24.393130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:51:24.409446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:51:24.426360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:51:24.437779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:51:24.467308 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:51:24.480593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:51:24.487354 dracut-cmdline[945]: dracut-dracut-053 May 16 00:51:24.500605 dracut-cmdline[945]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a May 16 00:51:24.494744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:51:24.508936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:51:24.517039 systemd-resolved[955]: Positive Trust Anchors: May 16 00:51:24.517341 systemd-resolved[955]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:51:24.517373 systemd-resolved[955]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:51:24.532432 systemd-resolved[955]: Defaulting to hostname 'linux'. May 16 00:51:24.545947 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:51:24.661617 kernel: SCSI subsystem initialized May 16 00:51:24.565576 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:51:24.677430 kernel: Loading iSCSI transport class v2.0-870. May 16 00:51:24.691159 kernel: iscsi: registered transport (tcp) May 16 00:51:24.718265 kernel: iscsi: registered transport (qla4xxx) May 16 00:51:24.718295 kernel: QLogic iSCSI HBA Driver May 16 00:51:24.761707 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:51:24.786353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:51:24.831448 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:51:24.831478 kernel: device-mapper: uevent: version 1.0.3 May 16 00:51:24.841214 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:51:24.907164 kernel: raid6: neonx8 gen() 15839 MB/s May 16 00:51:24.933163 kernel: raid6: neonx4 gen() 15716 MB/s May 16 00:51:24.959162 kernel: raid6: neonx2 gen() 13456 MB/s May 16 00:51:24.984163 kernel: raid6: neonx1 gen() 10523 MB/s May 16 00:51:25.009163 kernel: raid6: int64x8 gen() 6984 MB/s May 16 00:51:25.034163 kernel: raid6: int64x4 gen() 7384 MB/s May 16 00:51:25.059163 kernel: raid6: int64x2 gen() 6153 MB/s May 16 00:51:25.087290 kernel: raid6: int64x1 gen() 5077 MB/s May 16 00:51:25.087311 kernel: raid6: using algorithm neonx8 gen() 15839 MB/s May 16 00:51:25.121692 kernel: raid6: .... xor() 11972 MB/s, rmw enabled May 16 00:51:25.121716 kernel: raid6: using neon recovery algorithm May 16 00:51:25.141163 kernel: xor: measuring software checksum speed May 16 00:51:25.141186 kernel: 8regs : 19617 MB/sec May 16 00:51:25.157159 kernel: 32regs : 19399 MB/sec May 16 00:51:25.168362 kernel: arm64_neon : 26708 MB/sec May 16 00:51:25.168382 kernel: xor: using function: arm64_neon (26708 MB/sec) May 16 00:51:25.229161 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:51:25.238965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:51:25.259283 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:51:25.272911 systemd-udevd[1147]: Using default interface naming scheme 'v255'. May 16 00:51:25.275952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:51:25.299301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:51:25.313628 dracut-pre-trigger[1159]: rd.md=0: removing MD RAID activation May 16 00:51:25.339931 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:51:25.360326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:51:25.466129 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:51:25.494853 kernel: pps_core: LinuxPPS API ver. 1 registered May 16 00:51:25.494889 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 16 00:51:25.516289 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:51:25.561254 kernel: ACPI: bus type USB registered May 16 00:51:25.561268 kernel: usbcore: registered new interface driver usbfs May 16 00:51:25.561278 kernel: usbcore: registered new interface driver hub May 16 00:51:25.561288 kernel: usbcore: registered new device driver usb May 16 00:51:25.561297 kernel: PTP clock support registered May 16 00:51:25.556497 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:51:25.718070 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 16 00:51:25.718083 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
May 16 00:51:25.718092 kernel: igb 0003:03:00.0: Adding to iommu group 31 May 16 00:51:25.718250 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32 May 16 00:51:25.718345 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 16 00:51:25.718424 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 16 00:51:25.718503 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 16 00:51:25.718579 kernel: nvme 0005:03:00.0: Adding to iommu group 33 May 16 00:51:25.718667 kernel: igb 0003:03:00.0: added PHC on eth0 May 16 00:51:25.718746 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 34 May 16 00:51:25.718831 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 16 00:51:25.718907 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 16 00:51:25.718990 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c4 May 16 00:51:25.715860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:51:25.723776 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:51:25.740257 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:51:25.791802 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 16 00:51:25.791942 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 16 00:51:25.792029 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 16 00:51:25.768365 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:51:25.808772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:51:25.808867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:51:25.825495 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:51:25.836480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:51:25.836525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:51:25.853716 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:51:25.876260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:51:25.885915 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:51:26.070920 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 16 00:51:26.071132 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 16 00:51:26.071220 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 16 00:51:26.071297 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 16 00:51:26.071372 kernel: nvme nvme0: pci function 0005:03:00.0 May 16 00:51:26.071466 kernel: hub 1-0:1.0: USB hub found May 16 00:51:26.071562 kernel: hub 1-0:1.0: 4 ports detected May 16 00:51:26.071641 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 16 00:51:26.071732 kernel: hub 2-0:1.0: USB hub found May 16 00:51:26.071816 kernel: hub 2-0:1.0: 4 ports detected May 16 00:51:26.071893 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 16 00:51:26.071981 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 16 00:51:26.072062 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 16 00:51:26.072133 kernel: nvme nvme1: pci function 0005:04:00.0 May 16 00:51:26.081431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:51:26.110268 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:51:26.125681 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 16 00:51:26.149528 kernel: igb 0003:03:00.1: added PHC on eth1 May 16 00:51:26.149715 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection May 16 00:51:26.161267 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c5 May 16 00:51:26.173229 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 May 16 00:51:26.183119 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 16 00:51:26.204158 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 16 00:51:26.204657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:51:26.282654 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:51:26.282669 kernel: GPT:9289727 != 1875385007 May 16 00:51:26.282678 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:51:26.282687 kernel: GPT:9289727 != 1875385007 May 16 00:51:26.282696 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:51:26.282706 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:51:26.282715 kernel: nvme nvme1: 32/0/0 default/read/poll queues May 16 00:51:26.284159 kernel: igb 0003:03:00.1 eno2: renamed from eth1 May 16 00:51:26.308164 kernel: BTRFS: device fsid 462ff9f1-7a02-4839-b355-edf30dab0598 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (1209) May 16 00:51:26.308179 kernel: igb 0003:03:00.0 eno1: renamed from eth0 May 16 00:51:26.308278 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1238) May 16 00:51:26.312408 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. May 16 00:51:26.369944 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd May 16 00:51:26.379255 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. May 16 00:51:26.401365 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged May 16 00:51:26.417232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 16 00:51:26.422520 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 16 00:51:26.442220 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 16 00:51:26.465262 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:51:26.491892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:51:26.491907 disk-uuid[1310]: Primary Header is updated. May 16 00:51:26.491907 disk-uuid[1310]: Secondary Entries is updated. 
May 16 00:51:26.491907 disk-uuid[1310]: Secondary Header is updated. May 16 00:51:26.526680 kernel: hub 1-3:1.0: USB hub found May 16 00:51:26.526835 kernel: hub 1-3:1.0: 4 ports detected May 16 00:51:26.619164 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd May 16 00:51:26.654269 kernel: hub 2-3:1.0: USB hub found May 16 00:51:26.654477 kernel: hub 2-3:1.0: 4 ports detected May 16 00:51:26.699165 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 16 00:51:26.712158 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 May 16 00:51:26.735266 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 May 16 00:51:26.735416 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 16 00:51:27.081314 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged May 16 00:51:27.387164 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 16 00:51:27.402161 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 May 16 00:51:27.423160 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 May 16 00:51:27.491169 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:51:27.491554 disk-uuid[1311]: The operation has completed successfully. May 16 00:51:27.516997 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:51:27.517081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:51:27.549297 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:51:27.554317 sh[1482]: Success May 16 00:51:27.578159 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 16 00:51:27.609947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:51:27.629388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:51:27.639731 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 00:51:27.645158 kernel: BTRFS info (device dm-0): first mount of filesystem 462ff9f1-7a02-4839-b355-edf30dab0598 May 16 00:51:27.645175 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:27.645185 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:51:27.645196 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:51:27.645205 kernel: BTRFS info (device dm-0): using free space tree May 16 00:51:27.649157 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 16 00:51:27.730747 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:51:27.737353 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:51:27.755257 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:51:27.761186 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 16 00:51:27.871800 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:27.871821 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:27.871831 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:51:27.871841 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:51:27.871851 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 16 00:51:27.871860 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:27.868139 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:51:27.893352 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 00:51:27.903942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:51:27.930293 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:51:27.950741 systemd-networkd[1683]: lo: Link UP May 16 00:51:27.950747 systemd-networkd[1683]: lo: Gained carrier May 16 00:51:27.954560 systemd-networkd[1683]: Enumeration completed May 16 00:51:27.954675 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:51:27.956012 systemd-networkd[1683]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:51:27.961477 systemd[1]: Reached target network.target - Network. May 16 00:51:27.982111 ignition[1670]: Ignition 2.20.0 May 16 00:51:27.992953 unknown[1670]: fetched base config from "system" May 16 00:51:27.982117 ignition[1670]: Stage: fetch-offline May 16 00:51:27.992961 unknown[1670]: fetched user config from "system" May 16 00:51:27.982196 ignition[1670]: no configs at "/usr/lib/ignition/base.d" May 16 00:51:27.995485 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:51:27.982204 ignition[1670]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:28.003559 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:51:27.982514 ignition[1670]: parsed url from cmdline: "" May 16 00:51:28.007597 systemd-networkd[1683]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:51:27.982517 ignition[1670]: no config URL provided May 16 00:51:28.018305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 00:51:27.982522 ignition[1670]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:51:28.061034 systemd-networkd[1683]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 16 00:51:27.982574 ignition[1670]: parsing config with SHA512: 9719527db92e4fcbc8b37b4802d2fb6285d108097370fec6c5cb007d1b3dfcddaf3c7b2caeb54f200264e71629d8b23850fc2e94bee5aef46319b741df576c24 May 16 00:51:27.993410 ignition[1670]: fetch-offline: fetch-offline passed May 16 00:51:27.993415 ignition[1670]: POST message to Packet Timeline May 16 00:51:27.993420 ignition[1670]: POST Status error: resource requires networking May 16 00:51:27.993491 ignition[1670]: Ignition finished successfully May 16 00:51:28.031768 ignition[1709]: Ignition 2.20.0 May 16 00:51:28.031773 ignition[1709]: Stage: kargs May 16 00:51:28.032001 ignition[1709]: no configs at "/usr/lib/ignition/base.d" May 16 00:51:28.032010 ignition[1709]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:28.033466 ignition[1709]: kargs: kargs passed May 16 00:51:28.033486 ignition[1709]: POST message to Packet Timeline May 16 00:51:28.033708 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:28.036944 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43214->[::1]:53: read: connection refused May 16 00:51:28.237186 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #2 May 16 00:51:28.237865 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53035->[::1]:53: read: connection refused May 16 00:51:28.640821 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #3 May 16 00:51:28.641453 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51312->[::1]:53: read: connection refused May 16 00:51:28.664530 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 16 00:51:28.656331 systemd-networkd[1683]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 16 00:51:29.279166 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 16 00:51:29.282207 systemd-networkd[1683]: eno1: Link UP May 16 00:51:29.282409 systemd-networkd[1683]: eno2: Link UP May 16 00:51:29.282534 systemd-networkd[1683]: enP1p1s0f0np0: Link UP May 16 00:51:29.282678 systemd-networkd[1683]: enP1p1s0f0np0: Gained carrier May 16 00:51:29.293396 systemd-networkd[1683]: enP1p1s0f1np1: Link UP May 16 00:51:29.333183 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv4 address 147.28.151.230/30, gateway 147.28.151.229 acquired from 147.28.144.140 May 16 00:51:29.442435 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #4 May 16 00:51:29.442873 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52608->[::1]:53: read: connection refused May 16 00:51:29.659636 systemd-networkd[1683]: enP1p1s0f1np1: Gained carrier May 16 00:51:30.595239 systemd-networkd[1683]: enP1p1s0f0np0: Gained IPv6LL May 16 00:51:30.787215 systemd-networkd[1683]: enP1p1s0f1np1: Gained IPv6LL May 16 00:51:31.043602 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #5 May 16 00:51:31.044058 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56469->[::1]:53: read: connection refused May 16 00:51:34.246515 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #6 May 16 00:51:34.896072 ignition[1709]: GET result: OK May 16 00:51:35.197573 ignition[1709]: Ignition finished successfully May 16 00:51:35.201321 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:51:35.215373 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 00:51:35.226448 ignition[1726]: Ignition 2.20.0 May 16 00:51:35.226455 ignition[1726]: Stage: disks May 16 00:51:35.226615 ignition[1726]: no configs at "/usr/lib/ignition/base.d" May 16 00:51:35.226624 ignition[1726]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:35.227539 ignition[1726]: disks: disks passed May 16 00:51:35.227544 ignition[1726]: POST message to Packet Timeline May 16 00:51:35.227564 ignition[1726]: GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:35.700444 ignition[1726]: GET result: OK May 16 00:51:35.986279 ignition[1726]: Ignition finished successfully May 16 00:51:35.988169 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:51:35.994625 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:51:36.002170 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:51:36.010157 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:51:36.018592 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:51:36.027418 systemd[1]: Reached target basic.target - Basic System. May 16 00:51:36.051255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:51:36.066524 systemd-fsck[1745]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 16 00:51:36.070126 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:51:36.092253 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:51:36.160177 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 759e3456-2e58-4307-81e1-19f20d3141c2 r/w with ordered data mode. Quota mode: none. 
May 16 00:51:36.160509 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:51:36.170698 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:51:36.192233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:51:36.200158 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1760) May 16 00:51:36.200176 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:36.200187 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:36.200196 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:51:36.201157 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:51:36.201169 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 16 00:51:36.293212 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:51:36.299592 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 16 00:51:36.311200 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 16 00:51:36.325871 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:51:36.325940 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:51:36.358047 coreos-metadata[1779]: May 16 00:51:36.354 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 16 00:51:36.374616 coreos-metadata[1778]: May 16 00:51:36.354 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 16 00:51:36.338736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:51:36.352613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:51:36.374331 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 00:51:36.407525 initrd-setup-root[1798]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:51:36.413572 initrd-setup-root[1805]: cut: /sysroot/etc/group: No such file or directory May 16 00:51:36.419894 initrd-setup-root[1813]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:51:36.426232 initrd-setup-root[1820]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:51:36.495312 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:51:36.515239 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:51:36.523160 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:36.546241 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:51:36.552629 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:51:36.571687 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 16 00:51:36.577037 ignition[1894]: INFO : Ignition 2.20.0 May 16 00:51:36.577037 ignition[1894]: INFO : Stage: mount May 16 00:51:36.577037 ignition[1894]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:51:36.577037 ignition[1894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:36.577037 ignition[1894]: INFO : mount: mount passed May 16 00:51:36.577037 ignition[1894]: INFO : POST message to Packet Timeline May 16 00:51:36.577037 ignition[1894]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:36.857729 coreos-metadata[1778]: May 16 00:51:36.857 INFO Fetch successful May 16 00:51:36.904405 coreos-metadata[1778]: May 16 00:51:36.904 INFO wrote hostname ci-4152.2.3-n-16e7659192 to /sysroot/etc/hostname May 16 00:51:36.907548 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 16 00:51:37.086804 ignition[1894]: INFO : GET result: OK May 16 00:51:37.136987 coreos-metadata[1779]: May 16 00:51:37.136 INFO Fetch successful May 16 00:51:37.184920 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 16 00:51:37.185064 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 16 00:51:37.420394 ignition[1894]: INFO : Ignition finished successfully May 16 00:51:37.422684 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:51:37.440256 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:51:37.452298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:51:37.487645 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1924) May 16 00:51:37.487679 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:37.501930 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:37.514845 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:51:37.537573 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:51:37.537598 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 16 00:51:37.545593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:51:37.574768 ignition[1941]: INFO : Ignition 2.20.0 May 16 00:51:37.574768 ignition[1941]: INFO : Stage: files May 16 00:51:37.584209 ignition[1941]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:51:37.584209 ignition[1941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:37.584209 ignition[1941]: DEBUG : files: compiled without relabeling support, skipping May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:51:37.584209 ignition[1941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 00:51:37.584209 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 16 00:51:37.580356 unknown[1941]: wrote ssh authorized keys file for user: core May 16 00:51:37.704162 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:51:37.960889 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 May 16 00:51:38.484132 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 00:51:38.805826 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:51:38.818394 ignition[1941]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:51:38.818394 ignition[1941]: INFO : files: files passed May 16 00:51:38.818394 ignition[1941]: INFO : POST message to Packet Timeline May 16 00:51:38.818394 ignition[1941]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:39.414507 ignition[1941]: INFO : GET result: OK May 16 00:51:40.607163 ignition[1941]: INFO : Ignition finished successfully May 16 00:51:40.610342 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:51:40.628282 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:51:40.634928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:51:40.646670 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:51:40.646746 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 00:51:40.664941 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:51:40.698950 initrd-setup-root-after-ignition[1981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:51:40.698950 initrd-setup-root-after-ignition[1981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:51:40.676934 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:51:40.733686 initrd-setup-root-after-ignition[1985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:51:40.697414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
May 16 00:51:40.730417 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:51:40.730499 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:51:40.739588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:51:40.756249 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:51:40.772898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:51:40.784289 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:51:40.811353 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:51:40.835326 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:51:40.849671 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 00:51:40.858615 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:51:40.869807 systemd[1]: Stopped target timers.target - Timer Units. May 16 00:51:40.881062 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:51:40.881160 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:51:40.892429 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:51:40.903256 systemd[1]: Stopped target basic.target - Basic System. May 16 00:51:40.914334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:51:40.925395 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:51:40.936332 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:51:40.947301 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:51:40.958240 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:51:40.969210 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:51:40.980189 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:51:40.996556 systemd[1]: Stopped target swap.target - Swaps. May 16 00:51:41.007612 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:51:41.007701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:51:41.018901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 00:51:41.029796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:51:41.040928 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:51:41.044191 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:51:41.052077 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:51:41.052169 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:51:41.063353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:51:41.063436 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:51:41.074559 systemd[1]: Stopped target paths.target - Path Units. May 16 00:51:41.085545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:51:41.089173 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 16 00:51:41.102433 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:51:41.113759 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:51:41.125121 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:51:41.125210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:51:41.223276 ignition[2010]: INFO : Ignition 2.20.0 May 16 00:51:41.223276 ignition[2010]: INFO : Stage: umount May 16 00:51:41.223276 ignition[2010]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:51:41.223276 ignition[2010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:41.223276 ignition[2010]: INFO : umount: umount passed May 16 00:51:41.223276 ignition[2010]: INFO : POST message to Packet Timeline May 16 00:51:41.223276 ignition[2010]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:41.136544 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:51:41.136646 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:51:41.148067 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:51:41.148159 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:51:41.159582 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:51:41.159661 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:51:41.171020 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 16 00:51:41.171102 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 16 00:51:41.197277 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:51:41.205512 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:51:41.205611 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:51:41.218109 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:51:41.229228 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:51:41.229332 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:51:41.240829 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:51:41.240909 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:51:41.254274 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:51:41.255043 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:51:41.255127 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:51:41.264793 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:51:41.264865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:51:41.782622 ignition[2010]: INFO : GET result: OK May 16 00:51:42.070441 ignition[2010]: INFO : Ignition finished successfully May 16 00:51:42.072664 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:51:42.072850 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:51:42.080567 systemd[1]: Stopped target network.target - Network. May 16 00:51:42.089814 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:51:42.089874 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:51:42.099536 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:51:42.099568 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 16 00:51:42.108940 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:51:42.108972 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:51:42.118291 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:51:42.118336 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:51:42.127875 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:51:42.127905 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:51:42.137616 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 00:51:42.143175 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv6 lease lost May 16 00:51:42.147091 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:51:42.153266 systemd-networkd[1683]: enP1p1s0f1np1: DHCPv6 lease lost May 16 00:51:42.156883 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:51:42.156981 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:51:42.169213 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:51:42.169403 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:51:42.178076 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:51:42.178285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:51:42.198297 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:51:42.205394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:51:42.205442 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:51:42.215224 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:51:42.215258 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:51:42.224998 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:51:42.225027 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:51:42.235215 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:51:42.235245 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:51:42.245467 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:51:42.264430 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:51:42.264563 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:51:42.273701 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:51:42.273873 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:51:42.282500 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:51:42.282523 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:51:42.293034 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:51:42.293072 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 00:51:42.303924 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:51:42.303974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:51:42.319494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:51:42.319545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:51:42.336250 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:51:42.346989 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:51:42.347035 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:51:42.357999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:51:42.358031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:51:42.369487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:51:42.369554 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:51:42.924525 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:51:42.924686 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:51:42.935835 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 00:51:42.956342 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:51:42.969812 systemd[1]: Switching root. May 16 00:51:43.024985 systemd-journald[899]: Journal stopped May 16 00:51:24.177693 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 16 00:51:24.177716 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 15 22:19:24 -00 2025 May 16 00:51:24.177724 kernel: KASLR enabled May 16 00:51:24.177730 kernel: efi: EFI v2.7 by American Megatrends May 16 00:51:24.177735 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea451818 RNG=0xebf10018 MEMRESERVE=0xe4632f98 May 16 00:51:24.177741 kernel: random: crng init done May 16 00:51:24.177748 kernel: secureboot: Secure boot disabled May 16 00:51:24.177754 kernel: esrt: Reserving ESRT space from 0x00000000ea451818 to 0x00000000ea451878. May 16 00:51:24.177761 kernel: ACPI: Early table checksum verification disabled May 16 00:51:24.177767 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) May 16 00:51:24.177773 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 16 00:51:24.177779 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 16 00:51:24.177785 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 16 00:51:24.177791 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 16 00:51:24.177799 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 16 00:51:24.177805 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 16 00:51:24.177812 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 16 00:51:24.177818 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 16 00:51:24.177824 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) May 16 00:51:24.177830 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 16 00:51:24.177837 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 16 00:51:24.177843 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 
01000013) May 16 00:51:24.177849 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 16 00:51:24.177855 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 16 00:51:24.177863 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 16 00:51:24.177869 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013) May 16 00:51:24.177875 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 16 00:51:24.177881 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 16 00:51:24.177887 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 16 00:51:24.177894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 16 00:51:24.177900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 16 00:51:24.177906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 16 00:51:24.177912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 16 00:51:24.177918 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff] May 16 00:51:24.177924 kernel: Zone ranges: May 16 00:51:24.177932 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 16 00:51:24.177938 kernel: DMA32 empty May 16 00:51:24.177944 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 16 00:51:24.177950 kernel: Movable zone start for each node May 16 00:51:24.177957 kernel: Early memory node ranges May 16 00:51:24.177965 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 16 00:51:24.177972 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] May 16 00:51:24.177980 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 16 00:51:24.177987 kernel: node 0: [mem 0x0000000094000000-0x00000000eba31fff] May 16 00:51:24.177993 kernel: node 0: [mem 0x00000000eba32000-0x00000000ebea8fff] May 16 00:51:24.178000 kernel: node 0: [mem 0x00000000ebea9000-0x00000000ebeaefff] May 16 00:51:24.178006 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff] May 16 00:51:24.178013 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] May 16 00:51:24.178019 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] May 16 00:51:24.178026 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] May 16 00:51:24.178032 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] May 16 00:51:24.178039 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff] May 16 00:51:24.178047 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff] May 16 00:51:24.178054 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 16 00:51:24.178060 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 16 00:51:24.178066 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 16 00:51:24.178073 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 16 00:51:24.178080 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 16 00:51:24.178086 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 16 00:51:24.178092 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 16 00:51:24.178099 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 16 00:51:24.178106 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 16 00:51:24.178112 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 16 00:51:24.178120 kernel: psci: probing for conduit method from ACPI. 
May 16 00:51:24.178127 kernel: psci: PSCIv1.1 detected in firmware. May 16 00:51:24.178133 kernel: psci: Using standard PSCI v0.2 function IDs May 16 00:51:24.178140 kernel: psci: MIGRATE_INFO_TYPE not supported. May 16 00:51:24.178146 kernel: psci: SMC Calling Convention v1.2 May 16 00:51:24.178156 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 16 00:51:24.178163 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 16 00:51:24.178170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 16 00:51:24.178177 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 16 00:51:24.178183 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 16 00:51:24.178190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 16 00:51:24.178196 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 16 00:51:24.178204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 16 00:51:24.178211 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 16 00:51:24.178217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 16 00:51:24.178224 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 16 00:51:24.178231 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 16 00:51:24.178237 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 16 00:51:24.178244 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 16 00:51:24.178250 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 16 00:51:24.178257 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 16 00:51:24.178263 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 16 00:51:24.178270 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 16 00:51:24.178276 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 16 00:51:24.178284 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 16 00:51:24.178291 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 16 00:51:24.178297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 May 16 00:51:24.178304 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 16 00:51:24.178310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 May 16 00:51:24.178317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 16 00:51:24.178323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 16 00:51:24.178330 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 16 00:51:24.178336 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 16 00:51:24.178343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 16 00:51:24.178349 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 16 00:51:24.178357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 16 00:51:24.178364 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 16 00:51:24.178371 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 16 00:51:24.178377 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 16 00:51:24.178384 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 16 00:51:24.178390 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 16 00:51:24.178397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 16 00:51:24.178404 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 16 00:51:24.178410 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 16 00:51:24.178417 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> 
Node 0 May 16 00:51:24.178423 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 16 00:51:24.178430 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 16 00:51:24.178437 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 16 00:51:24.178444 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 16 00:51:24.178450 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 16 00:51:24.178457 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 16 00:51:24.178463 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 16 00:51:24.178470 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 May 16 00:51:24.178476 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 16 00:51:24.178483 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 16 00:51:24.178497 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 16 00:51:24.178504 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 16 00:51:24.178512 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 16 00:51:24.178519 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 16 00:51:24.178527 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 16 00:51:24.178534 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 16 00:51:24.178540 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 16 00:51:24.178547 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 16 00:51:24.178556 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 16 00:51:24.178563 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 16 00:51:24.178570 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 16 00:51:24.178576 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 16 00:51:24.178584 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 16 00:51:24.178590 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 16 00:51:24.178597 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 16 00:51:24.178604 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 16 00:51:24.178611 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 16 00:51:24.178618 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 16 00:51:24.178625 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 16 00:51:24.178632 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 16 00:51:24.178640 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 16 00:51:24.178647 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 May 16 00:51:24.178654 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 16 00:51:24.178661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 16 00:51:24.178668 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 16 00:51:24.178675 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 16 00:51:24.178682 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 16 00:51:24.178689 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 16 00:51:24.178696 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 16 00:51:24.178703 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 16 00:51:24.178710 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 16 00:51:24.178718 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 16 00:51:24.178726 kernel: pcpu-alloc: [0] 00 [0] 
01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 16 00:51:24.178733 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 16 00:51:24.178740 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 16 00:51:24.178746 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 16 00:51:24.178753 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 16 00:51:24.178760 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 16 00:51:24.178767 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 16 00:51:24.178774 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 16 00:51:24.178781 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 16 00:51:24.178788 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 16 00:51:24.178796 kernel: Detected PIPT I-cache on CPU0 May 16 00:51:24.178803 kernel: CPU features: detected: GIC system register CPU interface May 16 00:51:24.178810 kernel: CPU features: detected: Virtualization Host Extensions May 16 00:51:24.178817 kernel: CPU features: detected: Hardware dirty bit management May 16 00:51:24.178824 kernel: CPU features: detected: Spectre-v4 May 16 00:51:24.178831 kernel: CPU features: detected: Spectre-BHB May 16 00:51:24.178838 kernel: CPU features: kernel page table isolation forced ON by KASLR May 16 00:51:24.178845 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 16 00:51:24.178852 kernel: CPU features: detected: ARM erratum 1418040 May 16 00:51:24.178859 kernel: CPU features: detected: SSBS not fully self-synchronizing May 16 00:51:24.178866 kernel: alternatives: applying boot alternatives May 16 00:51:24.178874 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a May 16 00:51:24.178882 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:51:24.178889 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 16 00:51:24.178896 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 16 00:51:24.178903 kernel: printk: log_buf_len min size: 262144 bytes May 16 00:51:24.178910 kernel: printk: log_buf_len: 1048576 bytes May 16 00:51:24.178917 kernel: printk: early log buf free: 249864(95%) May 16 00:51:24.178924 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 16 00:51:24.178932 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 16 00:51:24.178938 kernel: Fallback order for Node 0: 0 May 16 00:51:24.178946 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 16 00:51:24.178954 kernel: Policy zone: Normal May 16 00:51:24.178961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:51:24.178968 kernel: software IO TLB: area num 128. 
May 16 00:51:24.178975 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 16 00:51:24.178982 kernel: Memory: 262922200K/268174336K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 5252136K reserved, 0K cma-reserved) May 16 00:51:24.178989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 16 00:51:24.178996 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:51:24.179004 kernel: rcu: RCU event tracing is enabled. May 16 00:51:24.179011 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 16 00:51:24.179019 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:51:24.179026 kernel: Tracing variant of Tasks RCU enabled. May 16 00:51:24.179033 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 00:51:24.179041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 16 00:51:24.179048 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 16 00:51:24.179055 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 16 00:51:24.179062 kernel: GICv3: 672 SPIs implemented May 16 00:51:24.179069 kernel: GICv3: 0 Extended SPIs implemented May 16 00:51:24.179076 kernel: Root IRQ handler: gic_handle_irq May 16 00:51:24.179083 kernel: GICv3: GICv3 features: 16 PPIs May 16 00:51:24.179090 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 16 00:51:24.179096 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 16 00:51:24.179103 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 16 00:51:24.179110 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 16 00:51:24.179117 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 16 00:51:24.179125 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 16 00:51:24.179132 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 16 00:51:24.179139 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 16 00:51:24.179146 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 16 00:51:24.179154 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 16 00:51:24.179162 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179169 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179176 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 16 00:51:24.179184 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179191 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179198 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 16 00:51:24.179206 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179214 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179221 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 16 00:51:24.179228 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179235 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179242 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 16 00:51:24.179249 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179256 kernel: ITS@0x00001001000c0000: allocated 32768 
Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179263 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 16 00:51:24.179270 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179277 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179285 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 16 00:51:24.179292 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179299 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179306 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 16 00:51:24.179313 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:51:24.179320 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 16 00:51:24.179328 kernel: GICv3: using LPI property table @0x00000800003e0000 May 16 00:51:24.179335 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 16 00:51:24.179342 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 00:51:24.179349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179356 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 16 00:51:24.179364 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 16 00:51:24.179372 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 16 00:51:24.179379 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 16 00:51:24.179386 kernel: Console: colour dummy device 80x25 May 16 00:51:24.179393 kernel: printk: console [tty0] enabled May 16 00:51:24.179401 kernel: ACPI: Core revision 20230628 May 16 00:51:24.179408 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 16 00:51:24.179415 kernel: pid_max: default: 81920 minimum: 640 May 16 00:51:24.179423 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 16 00:51:24.179430 kernel: landlock: Up and running. May 16 00:51:24.179438 kernel: SELinux: Initializing. May 16 00:51:24.179445 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:51:24.179453 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:51:24.179460 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 16 00:51:24.179467 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 16 00:51:24.179475 kernel: rcu: Hierarchical SRCU implementation. May 16 00:51:24.179482 kernel: rcu: Max phase no-delay instances is 400. 
May 16 00:51:24.179489 kernel: Platform MSI: ITS@0x100100040000 domain created May 16 00:51:24.179496 kernel: Platform MSI: ITS@0x100100060000 domain created May 16 00:51:24.179504 kernel: Platform MSI: ITS@0x100100080000 domain created May 16 00:51:24.179511 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 16 00:51:24.179519 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 16 00:51:24.179526 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 16 00:51:24.179533 kernel: Platform MSI: ITS@0x100100100000 domain created May 16 00:51:24.179540 kernel: Platform MSI: ITS@0x100100120000 domain created May 16 00:51:24.179547 kernel: PCI/MSI: ITS@0x100100040000 domain created May 16 00:51:24.179554 kernel: PCI/MSI: ITS@0x100100060000 domain created May 16 00:51:24.179561 kernel: PCI/MSI: ITS@0x100100080000 domain created May 16 00:51:24.179570 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 16 00:51:24.179577 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 16 00:51:24.179584 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 16 00:51:24.179591 kernel: PCI/MSI: ITS@0x100100100000 domain created May 16 00:51:24.179598 kernel: PCI/MSI: ITS@0x100100120000 domain created May 16 00:51:24.179605 kernel: Remapping and enabling EFI services. May 16 00:51:24.179612 kernel: smp: Bringing up secondary CPUs ... May 16 00:51:24.179619 kernel: Detected PIPT I-cache on CPU1 May 16 00:51:24.179626 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 May 16 00:51:24.179634 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 May 16 00:51:24.179642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179649 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] May 16 00:51:24.179656 kernel: Detected PIPT I-cache on CPU2 May 16 00:51:24.179663 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 May 16 00:51:24.179671 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 May 16 00:51:24.179678 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179685 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] May 16 00:51:24.179692 kernel: Detected PIPT I-cache on CPU3 May 16 00:51:24.179699 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 May 16 00:51:24.179707 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 May 16 00:51:24.179715 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179722 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] May 16 00:51:24.179729 kernel: Detected PIPT I-cache on CPU4 May 16 00:51:24.179736 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 May 16 00:51:24.179743 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 May 16 00:51:24.179750 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179757 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] May 16 00:51:24.179764 kernel: Detected PIPT I-cache on CPU5 May 16 00:51:24.179771 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 May 16 00:51:24.179780 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 May 16 00:51:24.179787 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179794 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] May 16 00:51:24.179801 kernel: Detected PIPT I-cache on CPU6 May 16 00:51:24.179809 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 May 16 00:51:24.179816 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 May 16 00:51:24.179823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179830 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] May 16 00:51:24.179837 kernel: Detected PIPT I-cache on CPU7 May 16 00:51:24.179846 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 May 16 00:51:24.179853 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 May 16 00:51:24.179860 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179867 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] May 16 00:51:24.179874 kernel: Detected PIPT I-cache on CPU8 May 16 00:51:24.179882 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 May 16 00:51:24.179889 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 May 16 00:51:24.179896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179903 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] May 16 00:51:24.179910 kernel: Detected PIPT I-cache on CPU9 May 16 00:51:24.179919 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 May 16 00:51:24.179926 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 May 16 00:51:24.179933 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179940 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] May 16 00:51:24.179947 kernel: Detected PIPT I-cache on CPU10 May 16 00:51:24.179954 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 May 16 00:51:24.179962 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 May 16 00:51:24.179969 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.179976 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] May 16 00:51:24.179984 kernel: Detected PIPT I-cache on CPU11 May 16 00:51:24.179991 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 May 16 00:51:24.179998 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 May 16 00:51:24.180005 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180012 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] May 16 00:51:24.180019 kernel: Detected PIPT I-cache on CPU12 May 16 00:51:24.180027 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 May 16 00:51:24.180034 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 May 16 00:51:24.180041 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180048 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] May 16 00:51:24.180056 kernel: Detected PIPT I-cache on CPU13 May 16 00:51:24.180063 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 May 16 00:51:24.180071 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 May 16 00:51:24.180078 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180085 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
May 16 00:51:24.180092 kernel: Detected PIPT I-cache on CPU14 May 16 00:51:24.180099 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 May 16 00:51:24.180107 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 May 16 00:51:24.180114 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180122 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] May 16 00:51:24.180129 kernel: Detected PIPT I-cache on CPU15 May 16 00:51:24.180137 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 May 16 00:51:24.180144 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 May 16 00:51:24.180151 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180160 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] May 16 00:51:24.180167 kernel: Detected PIPT I-cache on CPU16 May 16 00:51:24.180174 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 May 16 00:51:24.180182 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 May 16 00:51:24.180199 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180207 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] May 16 00:51:24.180215 kernel: Detected PIPT I-cache on CPU17 May 16 00:51:24.180222 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 May 16 00:51:24.180230 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 May 16 00:51:24.180237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180245 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] May 16 00:51:24.180252 kernel: Detected PIPT I-cache on CPU18 May 16 00:51:24.180260 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 May 16 00:51:24.180267 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 May 16 00:51:24.180276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180284 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] May 16 00:51:24.180291 kernel: Detected PIPT I-cache on CPU19 May 16 00:51:24.180298 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 16 00:51:24.180306 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 16 00:51:24.180313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180323 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 16 00:51:24.180330 kernel: Detected PIPT I-cache on CPU20 May 16 00:51:24.180338 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 16 00:51:24.180345 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 16 00:51:24.180353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180360 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 16 00:51:24.180368 kernel: Detected PIPT I-cache on CPU21 May 16 00:51:24.180375 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 16 00:51:24.180383 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 16 00:51:24.180392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180399 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 
16 00:51:24.180407 kernel: Detected PIPT I-cache on CPU22 May 16 00:51:24.180414 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 16 00:51:24.180422 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 16 00:51:24.180429 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180437 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 16 00:51:24.180444 kernel: Detected PIPT I-cache on CPU23 May 16 00:51:24.180452 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 16 00:51:24.180460 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 May 16 00:51:24.180468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180476 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 16 00:51:24.180483 kernel: Detected PIPT I-cache on CPU24 May 16 00:51:24.180491 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 16 00:51:24.180498 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 16 00:51:24.180506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180513 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 16 00:51:24.180521 kernel: Detected PIPT I-cache on CPU25 May 16 00:51:24.180529 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 16 00:51:24.180537 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 16 00:51:24.180546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180555 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 16 00:51:24.180562 kernel: Detected PIPT I-cache on CPU26 May 16 00:51:24.180570 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 16 00:51:24.180578 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 16 00:51:24.180585 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180592 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 16 00:51:24.180600 kernel: Detected PIPT I-cache on CPU27 May 16 00:51:24.180609 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 16 00:51:24.180616 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 16 00:51:24.180624 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180631 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 16 00:51:24.180639 kernel: Detected PIPT I-cache on CPU28 May 16 00:51:24.180646 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 16 00:51:24.180654 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 16 00:51:24.180662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180669 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 16 00:51:24.180676 kernel: Detected PIPT I-cache on CPU29 May 16 00:51:24.180685 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 16 00:51:24.180693 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 16 00:51:24.180700 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180708 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
May 16 00:51:24.180715 kernel: Detected PIPT I-cache on CPU30 May 16 00:51:24.180723 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 16 00:51:24.180730 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 16 00:51:24.180738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180745 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 16 00:51:24.180754 kernel: Detected PIPT I-cache on CPU31 May 16 00:51:24.180762 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 16 00:51:24.180770 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 16 00:51:24.180777 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180784 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 16 00:51:24.180792 kernel: Detected PIPT I-cache on CPU32 May 16 00:51:24.180799 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 16 00:51:24.180807 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 May 16 00:51:24.180814 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180822 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 16 00:51:24.180831 kernel: Detected PIPT I-cache on CPU33 May 16 00:51:24.180838 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 16 00:51:24.180846 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 16 00:51:24.180854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180861 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 16 00:51:24.180868 kernel: Detected PIPT I-cache on CPU34 May 16 00:51:24.180876 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 16 00:51:24.180884 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 16 00:51:24.180891 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180900 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 16 00:51:24.180908 kernel: Detected PIPT I-cache on CPU35 May 16 00:51:24.180915 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 16 00:51:24.180923 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 16 00:51:24.180931 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180938 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 16 00:51:24.180945 kernel: Detected PIPT I-cache on CPU36 May 16 00:51:24.180953 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 16 00:51:24.180960 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 16 00:51:24.180968 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.180977 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 16 00:51:24.180984 kernel: Detected PIPT I-cache on CPU37 May 16 00:51:24.180992 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 16 00:51:24.180999 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 16 00:51:24.181007 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181014 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
May 16 00:51:24.181022 kernel: Detected PIPT I-cache on CPU38 May 16 00:51:24.181029 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 16 00:51:24.181037 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 16 00:51:24.181046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181053 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 16 00:51:24.181060 kernel: Detected PIPT I-cache on CPU39 May 16 00:51:24.181069 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 16 00:51:24.181077 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 16 00:51:24.181084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181092 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 16 00:51:24.181099 kernel: Detected PIPT I-cache on CPU40 May 16 00:51:24.181108 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 16 00:51:24.181116 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 16 00:51:24.181123 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181131 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 16 00:51:24.181138 kernel: Detected PIPT I-cache on CPU41 May 16 00:51:24.181145 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 16 00:51:24.181219 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 16 00:51:24.181228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181236 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 16 00:51:24.181243 kernel: Detected PIPT I-cache on CPU42 May 16 00:51:24.181253 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 16 00:51:24.181261 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 16 00:51:24.181268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181276 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 16 00:51:24.181283 kernel: Detected PIPT I-cache on CPU43 May 16 00:51:24.181291 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 16 00:51:24.181298 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 16 00:51:24.181306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181314 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 16 00:51:24.181323 kernel: Detected PIPT I-cache on CPU44 May 16 00:51:24.181330 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 16 00:51:24.181338 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 16 00:51:24.181346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181353 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 16 00:51:24.181360 kernel: Detected PIPT I-cache on CPU45 May 16 00:51:24.181368 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 16 00:51:24.181375 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 16 00:51:24.181383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181391 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] May 16 00:51:24.181400 kernel: Detected PIPT I-cache on CPU46 May 16 00:51:24.181408 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 16 00:51:24.181415 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 16 00:51:24.181423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181430 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 16 00:51:24.181438 kernel: Detected PIPT I-cache on CPU47 May 16 00:51:24.181445 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 16 00:51:24.181453 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 16 00:51:24.181461 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181470 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 16 00:51:24.181477 kernel: Detected PIPT I-cache on CPU48 May 16 00:51:24.181485 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 16 00:51:24.181492 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 16 00:51:24.181500 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181507 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 16 00:51:24.181515 kernel: Detected PIPT I-cache on CPU49 May 16 00:51:24.181522 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 16 00:51:24.181530 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 16 00:51:24.181539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181546 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 16 00:51:24.181554 kernel: Detected PIPT I-cache on CPU50 May 16 00:51:24.181561 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 16 00:51:24.181569 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 16 00:51:24.181576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181584 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 16 00:51:24.181592 kernel: Detected PIPT I-cache on CPU51 May 16 00:51:24.181600 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 16 00:51:24.181608 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 16 00:51:24.181617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181624 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 16 00:51:24.181632 kernel: Detected PIPT I-cache on CPU52 May 16 00:51:24.181639 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 16 00:51:24.181647 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 16 00:51:24.181654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181662 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 16 00:51:24.181670 kernel: Detected PIPT I-cache on CPU53 May 16 00:51:24.181677 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 16 00:51:24.181686 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 16 00:51:24.181694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181701 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] May 16 00:51:24.181709 kernel: Detected PIPT I-cache on CPU54 May 16 00:51:24.181716 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 16 00:51:24.181724 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 16 00:51:24.181731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181739 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 16 00:51:24.181746 kernel: Detected PIPT I-cache on CPU55 May 16 00:51:24.181754 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 16 00:51:24.181763 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 16 00:51:24.181770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181778 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 16 00:51:24.181785 kernel: Detected PIPT I-cache on CPU56 May 16 00:51:24.181793 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 16 00:51:24.181800 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 16 00:51:24.181808 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181816 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 16 00:51:24.181824 kernel: Detected PIPT I-cache on CPU57 May 16 00:51:24.181832 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 16 00:51:24.181840 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 16 00:51:24.181848 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181855 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 16 00:51:24.181862 kernel: Detected PIPT I-cache on CPU58 May 16 00:51:24.181870 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 16 00:51:24.181878 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 16 00:51:24.181885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181893 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 16 00:51:24.181900 kernel: Detected PIPT I-cache on CPU59 May 16 00:51:24.181909 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 16 00:51:24.181917 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 16 00:51:24.181924 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181932 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 16 00:51:24.181939 kernel: Detected PIPT I-cache on CPU60 May 16 00:51:24.181947 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 16 00:51:24.181954 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 16 00:51:24.181962 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.181969 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 16 00:51:24.181978 kernel: Detected PIPT I-cache on CPU61 May 16 00:51:24.181986 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 16 00:51:24.181993 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 16 00:51:24.182001 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182008 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] May 16 00:51:24.182016 kernel: Detected PIPT I-cache on CPU62 May 16 00:51:24.182023 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 16 00:51:24.182031 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 16 00:51:24.182039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182046 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 16 00:51:24.182055 kernel: Detected PIPT I-cache on CPU63 May 16 00:51:24.182063 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 16 00:51:24.182071 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 16 00:51:24.182078 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182086 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 16 00:51:24.182093 kernel: Detected PIPT I-cache on CPU64 May 16 00:51:24.182100 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 16 00:51:24.182108 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 16 00:51:24.182116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182124 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 16 00:51:24.182132 kernel: Detected PIPT I-cache on CPU65 May 16 00:51:24.182139 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 16 00:51:24.182147 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 16 00:51:24.182157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182165 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 16 00:51:24.182172 kernel: Detected PIPT I-cache on CPU66 May 16 00:51:24.182180 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 16 00:51:24.182187 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 16 00:51:24.182196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182204 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 16 00:51:24.182211 kernel: Detected PIPT I-cache on CPU67 May 16 00:51:24.182219 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 16 00:51:24.182227 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 16 00:51:24.182234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182242 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 16 00:51:24.182249 kernel: Detected PIPT I-cache on CPU68 May 16 00:51:24.182257 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 16 00:51:24.182264 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 16 00:51:24.182273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182281 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 16 00:51:24.182288 kernel: Detected PIPT I-cache on CPU69 May 16 00:51:24.182296 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 16 00:51:24.182304 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 16 00:51:24.182311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182319 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] May 16 00:51:24.182326 kernel: Detected PIPT I-cache on CPU70 May 16 00:51:24.182333 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 16 00:51:24.182343 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 16 00:51:24.182350 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182358 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 16 00:51:24.182365 kernel: Detected PIPT I-cache on CPU71 May 16 00:51:24.182373 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 16 00:51:24.182380 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 16 00:51:24.182388 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182395 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 16 00:51:24.182403 kernel: Detected PIPT I-cache on CPU72 May 16 00:51:24.182410 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 16 00:51:24.182419 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 16 00:51:24.182427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182434 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 16 00:51:24.182442 kernel: Detected PIPT I-cache on CPU73 May 16 00:51:24.182449 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 16 00:51:24.182457 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 16 00:51:24.182464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182472 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 16 00:51:24.182479 kernel: Detected PIPT I-cache on CPU74 May 16 00:51:24.182488 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 16 00:51:24.182495 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 16 00:51:24.182503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182511 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 16 00:51:24.182518 kernel: Detected PIPT I-cache on CPU75 May 16 00:51:24.182526 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 16 00:51:24.182533 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 16 00:51:24.182541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182548 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 16 00:51:24.182556 kernel: Detected PIPT I-cache on CPU76 May 16 00:51:24.182565 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 16 00:51:24.182572 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 16 00:51:24.182580 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182587 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 16 00:51:24.182595 kernel: Detected PIPT I-cache on CPU77 May 16 00:51:24.182602 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 16 00:51:24.182610 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 16 00:51:24.182617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182625 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] May 16 00:51:24.182634 kernel: Detected PIPT I-cache on CPU78 May 16 00:51:24.182641 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 16 00:51:24.182649 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 16 00:51:24.182656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182664 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 16 00:51:24.182671 kernel: Detected PIPT I-cache on CPU79 May 16 00:51:24.182678 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 16 00:51:24.182686 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 16 00:51:24.182694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:51:24.182701 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 16 00:51:24.182710 kernel: smp: Brought up 1 node, 80 CPUs May 16 00:51:24.182718 kernel: SMP: Total of 80 processors activated. May 16 00:51:24.182725 kernel: CPU features: detected: 32-bit EL0 Support May 16 00:51:24.182733 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 16 00:51:24.182740 kernel: CPU features: detected: Common not Private translations May 16 00:51:24.182748 kernel: CPU features: detected: CRC32 instructions May 16 00:51:24.182755 kernel: CPU features: detected: Enhanced Virtualization Traps May 16 00:51:24.182763 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 16 00:51:24.182770 kernel: CPU features: detected: LSE atomic instructions May 16 00:51:24.182779 kernel: CPU features: detected: Privileged Access Never May 16 00:51:24.182787 kernel: CPU features: detected: RAS Extension Support May 16 00:51:24.182794 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 16 00:51:24.182801 kernel: CPU: All CPU(s) started at EL2 May 16 00:51:24.182809 kernel: alternatives: applying system-wide alternatives May 16 00:51:24.182817 kernel: devtmpfs: initialized May 16 00:51:24.182824 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:51:24.182832 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 16 00:51:24.182839 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:51:24.182848 kernel: SMBIOS 3.4.0 present. May 16 00:51:24.182856 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 16 00:51:24.182863 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:51:24.182871 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 16 00:51:24.182878 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 16 00:51:24.182886 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 16 00:51:24.182893 kernel: audit: initializing netlink subsys (disabled) May 16 00:51:24.182901 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 16 00:51:24.182909 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:51:24.182917 kernel: cpuidle: using governor menu May 16 00:51:24.182925 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 16 00:51:24.182932 kernel: ASID allocator initialised with 32768 entries May 16 00:51:24.182939 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:51:24.182947 kernel: Serial: AMBA PL011 UART driver May 16 00:51:24.182954 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 16 00:51:24.182962 kernel: Modules: 0 pages in range for non-PLT usage May 16 00:51:24.182969 kernel: Modules: 508944 pages in range for PLT usage May 16 00:51:24.182977 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:51:24.182986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 16 00:51:24.182993 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 16 00:51:24.183001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 16 00:51:24.183008 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:51:24.183016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 16 00:51:24.183023 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 16 00:51:24.183031 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 16 00:51:24.183038 kernel: ACPI: Added _OSI(Module Device) May 16 00:51:24.183046 kernel: ACPI: Added _OSI(Processor Device) May 16 00:51:24.183054 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:51:24.183062 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:51:24.183069 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 16 00:51:24.183077 kernel: ACPI: Interpreter enabled May 16 00:51:24.183084 kernel: ACPI: Using GIC for interrupt routing May 16 00:51:24.183092 kernel: ACPI: MCFG table detected, 8 entries May 16 00:51:24.183099 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183107 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183114 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183124 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183131 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183139 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183147 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183156 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 16 00:51:24.183164 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 16 00:51:24.183171 kernel: printk: console [ttyAMA0] enabled May 16 00:51:24.183179 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 16 00:51:24.183188 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 16 00:51:24.183316 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.183389 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.183452 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.183513 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.183573 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.183633 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] May 16 00:51:24.183645 kernel: PCI host bridge to bus 000d:00 May 16 00:51:24.183716 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 16 00:51:24.183772 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 16 00:51:24.183828 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 16 00:51:24.183906 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.183981 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.184048 kernel: pci 000d:00:01.0: enabling Extended Tags May 16 00:51:24.184114 kernel: pci 000d:00:01.0: supports D1 D2 May 16 00:51:24.184183 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184258 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.184324 kernel: pci 000d:00:02.0: supports D1 D2 May 16 00:51:24.184387 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184461 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.184528 kernel: pci 000d:00:03.0: supports D1 D2 May 16 00:51:24.184594 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184665 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.184730 kernel: pci 000d:00:04.0: supports D1 D2 May 16 00:51:24.184793 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.184803 kernel: acpiphp: Slot [1] registered May 16 00:51:24.184810 kernel: acpiphp: Slot [2] registered May 16 00:51:24.184820 kernel: acpiphp: Slot [3] registered May 16 00:51:24.184827 kernel: acpiphp: Slot [4] registered May 16 00:51:24.184885 kernel: pci_bus 000d:00: on NUMA node 0 May 16 00:51:24.184950 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.185016 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.185082 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.185147 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.185215 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.185282 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.185347 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.185412 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.185476 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.185540 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.185605 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.185669 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.185736 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 16 00:51:24.185800 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 16 00:51:24.185864 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] May 16 00:51:24.185928 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 16 00:51:24.185993 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 16 00:51:24.186057 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 16 00:51:24.186121 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 16 00:51:24.186192 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 16 00:51:24.186257 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186321 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186386 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186449 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186513 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186576 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186641 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186708 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186771 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186835 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.186901 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.186966 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.187029 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.187094 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.187161 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.187228 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.187292 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.187356 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 16 00:51:24.187420 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 16 00:51:24.187485 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.187548 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 16 00:51:24.187613 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 16 00:51:24.187680 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.187743 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 16 00:51:24.187808 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 16 00:51:24.187872 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.187936 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 16 00:51:24.187999 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 16 00:51:24.188061 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 16 00:51:24.188117 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 16 00:51:24.188192 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 16 00:51:24.188252 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 16 00:51:24.188320 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] May 16 00:51:24.188380 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 16 00:51:24.188460 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 16 00:51:24.188518 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 16 00:51:24.188585 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 16 00:51:24.188644 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 16 00:51:24.188654 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 16 00:51:24.188723 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.188788 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.188849 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.188911 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.188971 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 16 00:51:24.189032 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 16 00:51:24.189042 kernel: PCI host bridge to bus 0000:00 May 16 00:51:24.189109 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 16 00:51:24.189172 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 16 00:51:24.189229 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:51:24.189301 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.189373 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.189438 kernel: pci 0000:00:01.0: enabling Extended Tags May 16 00:51:24.189502 kernel: pci 0000:00:01.0: supports D1 D2 May 16 00:51:24.189567 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.189638 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.189703 kernel: pci 0000:00:02.0: supports D1 D2 May 16 00:51:24.189768 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.189840 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.189905 kernel: pci 0000:00:03.0: supports D1 D2 May 16 00:51:24.189968 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.190038 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.190104 kernel: pci 0000:00:04.0: supports D1 D2 May 16 00:51:24.190172 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.190182 kernel: acpiphp: Slot [1-1] registered May 16 00:51:24.190190 kernel: acpiphp: Slot [2-1] registered May 16 00:51:24.190197 kernel: acpiphp: Slot [3-1] registered May 16 00:51:24.190205 kernel: acpiphp: Slot [4-1] registered May 16 00:51:24.190259 kernel: pci_bus 0000:00: on NUMA node 0 May 16 00:51:24.190325 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.190391 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.190456 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.190519 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.190584 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.190648 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.190712 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.190775 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.190842 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.190906 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.190969 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.191034 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.191098 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 16 00:51:24.191167 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 16 00:51:24.191230 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 16 00:51:24.191298 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 16 00:51:24.191361 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 16 00:51:24.191426 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 16 00:51:24.191490 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 16 00:51:24.191554 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 16 00:51:24.191617 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.191681 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.191744 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.191810 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.191874 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.191938 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192002 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192065 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192129 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192197 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192264 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192329 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192393 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192457 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192521 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.192583 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.192648 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.192711 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 16 00:51:24.192775 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
May 16 00:51:24.192841 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.192904 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 16 00:51:24.192969 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 16 00:51:24.193032 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.193099 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 16 00:51:24.193165 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 16 00:51:24.193231 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.193293 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 16 00:51:24.193358 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 16 00:51:24.193416 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 16 00:51:24.193476 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 16 00:51:24.193544 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 16 00:51:24.193604 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 16 00:51:24.193670 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 16 00:51:24.193729 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 16 00:51:24.193803 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 16 00:51:24.193865 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 16 00:51:24.193931 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 16 00:51:24.193991 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 16 00:51:24.194001 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 16 00:51:24.194071 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.194133 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.194202 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.194263 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.194325 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 16 00:51:24.194386 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 16 00:51:24.194396 kernel: PCI host bridge to bus 0005:00 May 16 00:51:24.194460 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 16 00:51:24.194517 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 16 00:51:24.194576 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 16 00:51:24.194647 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.194719 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.194783 kernel: pci 0005:00:01.0: supports D1 D2 May 16 00:51:24.194847 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.194918 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.194982 kernel: pci 0005:00:03.0: supports D1 D2 May 16 00:51:24.195049 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.195121 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.195191 
kernel: pci 0005:00:05.0: supports D1 D2 May 16 00:51:24.195255 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.195329 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 16 00:51:24.195394 kernel: pci 0005:00:07.0: supports D1 D2 May 16 00:51:24.195458 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 16 00:51:24.195471 kernel: acpiphp: Slot [1-2] registered May 16 00:51:24.195478 kernel: acpiphp: Slot [2-2] registered May 16 00:51:24.195552 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 16 00:51:24.195620 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 16 00:51:24.195686 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 16 00:51:24.195760 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 16 00:51:24.195827 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 16 00:51:24.195895 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 16 00:51:24.195954 kernel: pci_bus 0005:00: on NUMA node 0 May 16 00:51:24.196019 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.196085 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.196149 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.196221 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.196285 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.196353 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.196421 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.196496 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.196604 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 16 00:51:24.196675 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.196741 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.196815 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 16 00:51:24.196881 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 16 00:51:24.196945 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 16 00:51:24.197010 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 16 00:51:24.197073 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 16 00:51:24.197138 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 16 00:51:24.197206 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 16 00:51:24.197271 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 16 00:51:24.197337 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 16 00:51:24.197402 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
May 16 00:51:24.197465 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197531 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197597 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197662 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197725 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197789 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197856 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.197920 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.197984 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198046 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.198110 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198177 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.198242 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198306 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.198371 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.198438 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.198503 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 16 00:51:24.198568 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 16 00:51:24.198633 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 16 00:51:24.198697 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 16 00:51:24.198762 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 16 00:51:24.198834 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 16 00:51:24.198899 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 16 00:51:24.198965 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 16 00:51:24.199028 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 16 00:51:24.199092 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 16 00:51:24.199162 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 16 00:51:24.199229 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 16 00:51:24.199295 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 16 00:51:24.199360 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 16 00:51:24.199424 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 16 00:51:24.199499 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 16 00:51:24.199556 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 16 00:51:24.199626 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 16 00:51:24.199686 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 16 00:51:24.199764 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 16 00:51:24.199824 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 16 00:51:24.199889 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 16 00:51:24.199951 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 16 00:51:24.200017 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 16 00:51:24.200080 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 16 00:51:24.200090 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 16 00:51:24.200162 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.200227 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.200288 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.200352 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.200413 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.200477 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 16 00:51:24.200487 kernel: PCI host bridge to bus 0003:00 May 16 00:51:24.200553 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 16 00:51:24.200611 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 16 00:51:24.200669 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 16 00:51:24.200740 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.200813 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.200881 kernel: pci 0003:00:01.0: supports D1 D2 May 16 00:51:24.200946 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.201016 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.201082 kernel: pci 0003:00:03.0: supports D1 D2 May 16 00:51:24.201146 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.201221 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.201288 kernel: pci 0003:00:05.0: supports D1 D2 May 16 00:51:24.201354 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.201364 kernel: acpiphp: Slot [1-3] registered May 16 00:51:24.201372 kernel: acpiphp: Slot [2-3] registered May 16 00:51:24.201444 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 16 00:51:24.201512 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 16 00:51:24.201578 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 16 00:51:24.201643 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 16 00:51:24.201711 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 16 00:51:24.201776 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 16 00:51:24.201844 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 16 00:51:24.201909 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 16 00:51:24.201976 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 16 00:51:24.202042 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 16 00:51:24.202116 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 16 00:51:24.202191 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 16 00:51:24.202262 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 16 00:51:24.202334 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 16 00:51:24.202405 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 16 00:51:24.202475 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 16 00:51:24.202542 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 16 00:51:24.202609 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 16 00:51:24.202678 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 16 00:51:24.202736 kernel: pci_bus 0003:00: on NUMA node 0 May 16 00:51:24.202802 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.202866 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.202930 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.202995 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.203060 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.203123 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.203194 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 16 00:51:24.203257 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 16 00:51:24.203323 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 16 00:51:24.203397 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 16 00:51:24.203464 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 16 00:51:24.203527 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 16 00:51:24.203591 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 16 00:51:24.203658 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 16 00:51:24.203722 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.203789 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.203852 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.203917 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.203980 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204046 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.204109 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204276 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.204344 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204407 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.204470 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.204532 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 
00:51:24.204594 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.204657 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 16 00:51:24.204719 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 16 00:51:24.204786 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 16 00:51:24.204851 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 16 00:51:24.204916 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 16 00:51:24.204982 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 16 00:51:24.205049 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 16 00:51:24.205113 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 16 00:51:24.205186 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 16 00:51:24.205251 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 16 00:51:24.205317 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 16 00:51:24.205382 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 16 00:51:24.205448 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 16 00:51:24.205514 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205578 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.205644 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205709 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.205774 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205837 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.205902 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 16 00:51:24.205967 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 16 00:51:24.206030 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 16 00:51:24.206092 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 16 00:51:24.206160 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 16 00:51:24.206219 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 16 00:51:24.206275 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 16 00:51:24.206331 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 16 00:51:24.206406 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 16 00:51:24.206466 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 16 00:51:24.206534 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 16 00:51:24.206593 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 16 00:51:24.206657 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 16 00:51:24.206716 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 16 00:51:24.206726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 16 00:51:24.206793 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.206858 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 
00:51:24.206919 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.206979 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.207040 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.207100 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 16 00:51:24.207111 kernel: PCI host bridge to bus 000c:00 May 16 00:51:24.207178 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 16 00:51:24.207237 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 16 00:51:24.207294 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 16 00:51:24.207364 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.207436 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.207500 kernel: pci 000c:00:01.0: enabling Extended Tags May 16 00:51:24.207564 kernel: pci 000c:00:01.0: supports D1 D2 May 16 00:51:24.207626 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.207698 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.207762 kernel: pci 000c:00:02.0: supports D1 D2 May 16 00:51:24.207825 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.207896 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.207959 kernel: pci 000c:00:03.0: supports D1 D2 May 16 00:51:24.208022 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.208091 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.208161 kernel: pci 000c:00:04.0: supports D1 D2 May 16 00:51:24.208225 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.208235 kernel: acpiphp: Slot [1-4] registered May 16 00:51:24.208243 kernel: acpiphp: Slot [2-4] registered May 16 00:51:24.208251 kernel: acpiphp: Slot [3-2] registered May 16 00:51:24.208259 kernel: acpiphp: Slot [4-2] registered May 16 00:51:24.208314 kernel: pci_bus 000c:00: on NUMA node 0 May 16 00:51:24.208377 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.208443 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.208507 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.208571 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.208635 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.208701 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.208766 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.208830 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.208894 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.208958 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.209023 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.209088 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.209155 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 16 00:51:24.209220 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] May 16 00:51:24.209286 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 16 00:51:24.209352 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 16 00:51:24.209418 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 16 00:51:24.209481 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 16 00:51:24.209547 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 16 00:51:24.209611 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 16 00:51:24.209676 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.209739 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.209804 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.209869 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.209935 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.209999 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210063 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210127 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210194 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210259 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210323 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210387 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210453 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210518 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210581 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.210647 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.210711 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.210776 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 16 00:51:24.210840 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 16 00:51:24.210907 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.210970 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 16 00:51:24.211036 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 16 00:51:24.211101 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.211168 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 16 00:51:24.211234 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 16 00:51:24.211298 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.211366 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 16 00:51:24.211430 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 16 
00:51:24.211490 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 16 00:51:24.211547 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 16 00:51:24.211618 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 16 00:51:24.211677 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 16 00:51:24.211755 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 16 00:51:24.211815 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 16 00:51:24.211883 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 16 00:51:24.211942 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 16 00:51:24.212010 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 16 00:51:24.212069 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 16 00:51:24.212081 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 16 00:51:24.212156 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.212222 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.212284 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.212347 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.212408 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 16 00:51:24.212474 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 16 00:51:24.212487 kernel: PCI host bridge to bus 0002:00 May 16 00:51:24.212553 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 16 00:51:24.212612 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 16 00:51:24.212668 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 16 00:51:24.212741 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.212812 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.212881 kernel: pci 0002:00:01.0: supports D1 D2 May 16 00:51:24.212948 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213020 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.213085 kernel: pci 0002:00:03.0: supports D1 D2 May 16 00:51:24.213149 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213224 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.213289 kernel: pci 0002:00:05.0: supports D1 D2 May 16 00:51:24.213354 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213429 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 16 00:51:24.213495 kernel: pci 0002:00:07.0: supports D1 D2 May 16 00:51:24.213559 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 16 00:51:24.213569 kernel: acpiphp: Slot [1-5] registered May 16 00:51:24.213577 kernel: acpiphp: Slot [2-5] registered May 16 00:51:24.213585 kernel: acpiphp: Slot [3-3] registered May 16 00:51:24.213592 kernel: acpiphp: Slot [4-3] registered May 16 00:51:24.213649 kernel: pci_bus 0002:00: on NUMA node 0 May 16 00:51:24.213716 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.213780 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.213844 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 16 00:51:24.213912 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.213977 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.214042 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.214106 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.214174 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.214241 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.214306 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.214371 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.214438 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.214504 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 16 00:51:24.214568 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 16 00:51:24.214633 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 16 00:51:24.214696 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 16 00:51:24.214761 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 16 00:51:24.214824 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 16 00:51:24.214889 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 16 00:51:24.214956 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 16 00:51:24.215021 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215084 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215150 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215217 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215283 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215347 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215411 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215478 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215542 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215607 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215670 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215734 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215798 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.215863 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.215927 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] May 16 00:51:24.215991 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.216056 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.216121 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 16 00:51:24.216190 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 16 00:51:24.216256 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 16 00:51:24.216323 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 16 00:51:24.216388 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 16 00:51:24.216456 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 16 00:51:24.216520 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 16 00:51:24.216587 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 16 00:51:24.216651 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 16 00:51:24.216717 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 16 00:51:24.216781 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 16 00:51:24.216844 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 16 00:51:24.216901 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 16 00:51:24.216973 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 16 00:51:24.217034 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 16 00:51:24.217102 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 16 00:51:24.217325 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 16 00:51:24.217410 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 16 00:51:24.217472 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 16 00:51:24.217538 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 16 00:51:24.217601 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 16 00:51:24.217613 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 16 00:51:24.217683 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.217744 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.217807 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.217867 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.217927 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 16 00:51:24.218006 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 16 00:51:24.218017 kernel: PCI host bridge to bus 0001:00 May 16 00:51:24.218082 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 16 00:51:24.218140 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 16 00:51:24.218205 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 16 00:51:24.218276 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 16 00:51:24.218348 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 16 00:51:24.218413 kernel: pci 0001:00:01.0: enabling Extended Tags May 16 00:51:24.218476 kernel: pci 
0001:00:01.0: supports D1 D2 May 16 00:51:24.218539 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.218610 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 16 00:51:24.218676 kernel: pci 0001:00:02.0: supports D1 D2 May 16 00:51:24.218739 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 16 00:51:24.218810 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 16 00:51:24.218873 kernel: pci 0001:00:03.0: supports D1 D2 May 16 00:51:24.218936 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.219005 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 16 00:51:24.219072 kernel: pci 0001:00:04.0: supports D1 D2 May 16 00:51:24.219136 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 16 00:51:24.219146 kernel: acpiphp: Slot [1-6] registered May 16 00:51:24.219223 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 16 00:51:24.219289 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 16 00:51:24.219353 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 16 00:51:24.219418 kernel: pci 0001:01:00.0: PME# supported from D3cold May 16 00:51:24.219485 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 16 00:51:24.219557 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 16 00:51:24.219622 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 16 00:51:24.219688 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 16 00:51:24.219752 kernel: pci 0001:01:00.1: PME# supported from D3cold May 16 00:51:24.219762 kernel: acpiphp: Slot [2-6] registered May 16 00:51:24.219770 kernel: acpiphp: Slot [3-4] registered May 16 00:51:24.219778 kernel: acpiphp: Slot [4-4] registered May 16 00:51:24.219835 kernel: pci_bus 0001:00: on NUMA node 0 May 16 00:51:24.219899 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 16 00:51:24.219965 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 16 00:51:24.220028 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.220091 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 16 00:51:24.220283 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.220362 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.220430 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.220496 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.220560 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.220623 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.220687 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 16 00:51:24.220752 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 16 00:51:24.220815 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] May 16 00:51:24.220880 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 16 00:51:24.220952 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 16 00:51:24.221016 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 16 00:51:24.221078 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 16 00:51:24.221142 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 16 00:51:24.221214 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221279 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221341 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221407 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221470 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221533 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221596 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221659 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221722 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221785 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221847 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.221911 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.221974 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.222036 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.222101 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.222168 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.222235 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 16 00:51:24.222302 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 16 00:51:24.222367 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 16 00:51:24.222434 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 16 00:51:24.222497 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 16 00:51:24.222560 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 16 00:51:24.222623 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 16 00:51:24.222687 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 16 00:51:24.222749 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 16 00:51:24.222812 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 16 00:51:24.222877 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.222940 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 16 00:51:24.223003 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 16 00:51:24.223067 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 16 00:51:24.223130 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 16 00:51:24.223198 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 16 00:51:24.223259 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] May 16 00:51:24.223316 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 16 00:51:24.223393 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 16 00:51:24.223452 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 16 00:51:24.223518 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 16 00:51:24.223577 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 16 00:51:24.223644 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 16 00:51:24.223703 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 16 00:51:24.223768 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 16 00:51:24.223826 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 16 00:51:24.223836 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 16 00:51:24.223904 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:51:24.223968 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 16 00:51:24.224029 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 16 00:51:24.224089 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 16 00:51:24.224150 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 16 00:51:24.224434 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 16 00:51:24.224446 kernel: PCI host bridge to bus 0004:00 May 16 00:51:24.224512 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 16 00:51:24.224573 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 16 00:51:24.224628 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 16 00:51:24.224698 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 16 00:51:24.224768 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 16 00:51:24.224832 kernel: pci 0004:00:01.0: supports D1 D2 May 16 00:51:24.224894 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 16 00:51:24.224963 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 16 00:51:24.225032 kernel: pci 0004:00:03.0: supports D1 D2 May 16 00:51:24.225094 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 16 00:51:24.225168 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 16 00:51:24.225233 kernel: pci 0004:00:05.0: supports D1 D2 May 16 00:51:24.225295 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 16 00:51:24.225368 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 16 00:51:24.225434 kernel: pci 0004:01:00.0: enabling Extended Tags May 16 00:51:24.225502 kernel: pci 0004:01:00.0: supports D1 D2 May 16 00:51:24.225567 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 16 00:51:24.225645 kernel: pci_bus 0004:02: extended config space not accessible May 16 00:51:24.225719 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 16 00:51:24.225787 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 16 00:51:24.225855 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 16 00:51:24.225922 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 16 
00:51:24.225992 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 16 00:51:24.226059 kernel: pci 0004:02:00.0: supports D1 D2 May 16 00:51:24.226126 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 16 00:51:24.226204 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 16 00:51:24.226273 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 16 00:51:24.226338 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 16 00:51:24.226395 kernel: pci_bus 0004:00: on NUMA node 0 May 16 00:51:24.226461 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 16 00:51:24.226525 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 16 00:51:24.226589 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 16 00:51:24.226651 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 16 00:51:24.226715 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 16 00:51:24.226778 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.226841 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 16 00:51:24.226907 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 16 00:51:24.226970 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 16 00:51:24.227034 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 16 00:51:24.227097 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 16 00:51:24.227163 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 16 00:51:24.227226 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 16 00:51:24.227290 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227353 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227418 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227481 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227543 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227605 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227668 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227731 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227795 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227857 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.227922 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.227984 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.228050 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 16 00:51:24.228116 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 16 00:51:24.228184 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 16 00:51:24.228254 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 16 
00:51:24.228321 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 16 00:51:24.228388 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 16 00:51:24.228458 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 16 00:51:24.228523 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 16 00:51:24.228587 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 16 00:51:24.228650 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 16 00:51:24.228714 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 16 00:51:24.228776 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 16 00:51:24.228843 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 16 00:51:24.228906 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 16 00:51:24.228971 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 16 00:51:24.229034 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 16 00:51:24.229098 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 16 00:51:24.229163 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 16 00:51:24.229227 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 16 00:51:24.229285 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 16 00:51:24.229346 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 16 00:51:24.229402 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 16 00:51:24.229469 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 16 00:51:24.229529 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 16 00:51:24.229591 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] May 16 00:51:24.229658 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 16 00:51:24.229716 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 16 00:51:24.229783 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 16 00:51:24.229841 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 16 00:51:24.229852 kernel: iommu: Default domain type: Translated May 16 00:51:24.229860 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 16 00:51:24.229868 kernel: efivars: Registered efivars operations May 16 00:51:24.229934 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 16 00:51:24.230004 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 16 00:51:24.230072 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 16 00:51:24.230085 kernel: vgaarb: loaded May 16 00:51:24.230093 kernel: clocksource: Switched to clocksource arch_sys_counter May 16 00:51:24.230101 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:51:24.230109 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:51:24.230117 kernel: pnp: PnP ACPI init May 16 00:51:24.230189 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 16 00:51:24.230248 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 16 00:51:24.230308 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 16 00:51:24.230365 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved May 16 00:51:24.230423 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 16 00:51:24.230479 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 16 00:51:24.230537 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved May 16 00:51:24.230594 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 16 00:51:24.230604 kernel: pnp: PnP ACPI: found 1 devices May 16 00:51:24.230615 kernel: NET: Registered PF_INET protocol family May 16 00:51:24.230623 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230631 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 16 00:51:24.230639 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:51:24.230647 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 00:51:24.230655 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230664 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 16 00:51:24.230672 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230681 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 16 00:51:24.230689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:51:24.230756 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 16 00:51:24.230767 kernel: kvm [1]: IPA Size Limit: 48 bits May 16 00:51:24.230775 kernel: kvm [1]: GICv3: no GICV resource entry May 16 00:51:24.230783 kernel: kvm [1]: disabling GICv2 emulation May 16 00:51:24.230791 kernel: kvm [1]: GIC system register CPU interface enabled May 16 00:51:24.230799 kernel: kvm [1]: vgic interrupt IRQ9 May 16 00:51:24.230807 kernel: kvm [1]: VHE mode initialized successfully May 16 00:51:24.230817 kernel: Initialise system trusted keyrings May 16 00:51:24.230825 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 16 00:51:24.230833 kernel: Key type asymmetric registered May 16 00:51:24.230840 kernel: Asymmetric key parser 'x509' registered May 16 00:51:24.230848 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 16 00:51:24.230856 kernel: io scheduler mq-deadline registered May 16 00:51:24.230864 kernel: io scheduler kyber registered May 16 00:51:24.230872 kernel: io scheduler bfq registered May 16 00:51:24.230880 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 16 00:51:24.230888 kernel: ACPI: button: Power Button [PWRB] May 16 00:51:24.230897 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
May 16 00:51:24.230905 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:51:24.230975 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 16 00:51:24.231036 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.231096 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.231158 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 16 00:51:24.231219 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 16 00:51:24.231279 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 16 00:51:24.231346 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 16 00:51:24.231405 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.231464 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.231522 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 16 00:51:24.231580 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 16 00:51:24.231640 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 16 00:51:24.231706 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 16 00:51:24.231764 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.231823 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.231881 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 16 00:51:24.231942 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 16 00:51:24.232000 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 16 00:51:24.232069 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 16 00:51:24.232128 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.232191 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.232250 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 16 00:51:24.232308 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 16 00:51:24.232367 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 16 00:51:24.232440 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 16 00:51:24.232502 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.232560 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.232619 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 16 00:51:24.232676 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 16 00:51:24.232735 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 16 00:51:24.232800 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 16 00:51:24.232862 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.232920 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.232979 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 16 00:51:24.233036 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 16 
00:51:24.233095 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 16 00:51:24.233167 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 16 00:51:24.233229 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.233288 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.233347 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 16 00:51:24.233407 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 16 00:51:24.233465 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 16 00:51:24.233529 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 16 00:51:24.233590 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 16 00:51:24.233649 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 16 00:51:24.233707 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 16 00:51:24.233766 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 16 00:51:24.233826 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 16 00:51:24.233837 kernel: thunder_xcv, ver 1.0 May 16 00:51:24.233845 kernel: thunder_bgx, ver 1.0 May 16 00:51:24.233853 kernel: nicpf, ver 1.0 May 16 00:51:24.233863 kernel: nicvf, ver 1.0 May 16 00:51:24.233927 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 16 00:51:24.233986 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:51:22 UTC (1747356682) May 16 00:51:24.233997 kernel: efifb: probing for efifb May 16 00:51:24.234005 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 16 00:51:24.234013 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 16 00:51:24.234021 kernel: efifb: scrolling: redraw May 16 00:51:24.234029 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 16 00:51:24.234039 kernel: Console: switching to colour frame buffer device 100x37 May 16 00:51:24.234047 kernel: fb0: EFI VGA frame buffer device May 16 00:51:24.234055 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 16 00:51:24.234063 kernel: hid: raw HID events driver (C) Jiri Kosina May 16 00:51:24.234072 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 16 00:51:24.234079 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 16 00:51:24.234088 kernel: watchdog: Hard watchdog permanently disabled May 16 00:51:24.234096 kernel: NET: Registered PF_INET6 protocol family May 16 00:51:24.234103 kernel: Segment Routing with IPv6 May 16 00:51:24.234113 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:51:24.234120 kernel: NET: Registered PF_PACKET protocol family May 16 00:51:24.234128 kernel: Key type dns_resolver registered May 16 00:51:24.234136 kernel: registered taskstats version 1 May 16 00:51:24.234144 kernel: Loading compiled-in X.509 certificates May 16 00:51:24.234156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: c5ee9c587519d4ef57ff0de9630e786a4c7faded' May 16 00:51:24.234164 kernel: Key type .fscrypt registered May 16 00:51:24.234172 kernel: Key type fscrypt-provisioning registered May 16 00:51:24.234179 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 16 00:51:24.234190 kernel: ima: Allocated hash algorithm: sha1 May 16 00:51:24.234198 kernel: ima: No architecture policies found May 16 00:51:24.234206 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 16 00:51:24.234273 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 16 00:51:24.234338 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 16 00:51:24.234403 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 16 00:51:24.234467 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 16 00:51:24.234532 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 16 00:51:24.234596 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 16 00:51:24.234662 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 16 00:51:24.234726 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 16 00:51:24.234791 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 16 00:51:24.234855 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 16 00:51:24.234919 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 16 00:51:24.234982 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 16 00:51:24.235047 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 16 00:51:24.235111 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 16 00:51:24.235181 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 16 00:51:24.235246 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 16 00:51:24.235311 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 16 00:51:24.235375 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 16 00:51:24.235440 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 16 00:51:24.235503 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 16 00:51:24.235568 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 16 00:51:24.235630 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 16 00:51:24.235697 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 May 16 00:51:24.235760 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 16 00:51:24.235825 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 16 00:51:24.235888 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 16 00:51:24.235952 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 16 00:51:24.236015 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 16 00:51:24.236080 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 16 00:51:24.236144 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 16 00:51:24.236213 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 16 00:51:24.236279 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 16 00:51:24.236344 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 16 00:51:24.236408 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 16 00:51:24.236472 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 16 00:51:24.236536 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 16 00:51:24.236599 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 16 00:51:24.236662 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 16 00:51:24.236727 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 16 00:51:24.236793 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 16 00:51:24.236856 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 16 00:51:24.236920 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 16 00:51:24.236984 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 16 00:51:24.237048 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 16 00:51:24.237112 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 16 00:51:24.237179 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 16 00:51:24.237244 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 16 00:51:24.237311 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 May 16 00:51:24.237374 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 16 00:51:24.237437 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 16 00:51:24.237501 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 16 00:51:24.237564 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 16 00:51:24.237629 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 16 00:51:24.237691 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 16 00:51:24.237756 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 16 00:51:24.237822 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 16 00:51:24.237887 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 16 00:51:24.237951 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 16 00:51:24.238017 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 16 00:51:24.238080 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 16 00:51:24.238147 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 16 00:51:24.238161 kernel: clk: Disabling unused clocks May 16 00:51:24.238169 kernel: Freeing unused kernel memory: 39744K May 16 00:51:24.238179 kernel: Run /init as init process May 16 00:51:24.238187 kernel: with arguments: May 16 00:51:24.238195 kernel: /init May 16 00:51:24.238203 kernel: with environment: May 16 00:51:24.238210 kernel: HOME=/ May 16 00:51:24.238218 kernel: TERM=linux May 16 00:51:24.238225 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:51:24.238235 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:51:24.238247 systemd[1]: Detected architecture arm64. May 16 00:51:24.238256 systemd[1]: Running in initrd. May 16 00:51:24.238264 systemd[1]: No hostname configured, using default hostname. May 16 00:51:24.238272 systemd[1]: Hostname set to . May 16 00:51:24.238280 systemd[1]: Initializing machine ID from random generator. May 16 00:51:24.238289 systemd[1]: Queued start job for default target initrd.target. May 16 00:51:24.238297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:51:24.238306 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:51:24.238316 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:51:24.238325 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:51:24.238333 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:51:24.238342 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 16 00:51:24.238352 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:51:24.238361 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:51:24.238369 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:51:24.238379 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:51:24.238387 systemd[1]: Reached target paths.target - Path Units. May 16 00:51:24.238396 systemd[1]: Reached target slices.target - Slice Units. May 16 00:51:24.238404 systemd[1]: Reached target swap.target - Swaps. May 16 00:51:24.238412 systemd[1]: Reached target timers.target - Timer Units. May 16 00:51:24.238420 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:51:24.238429 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:51:24.238437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:51:24.238447 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 16 00:51:24.238455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:51:24.238463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:51:24.238472 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:51:24.238480 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:51:24.238488 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:51:24.238497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:51:24.238505 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:51:24.238513 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:51:24.238523 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:51:24.238531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:51:24.238561 systemd-journald[899]: Collecting audit messages is disabled. May 16 00:51:24.238582 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:51:24.238592 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:51:24.238600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:51:24.238608 kernel: Bridge firewalling registered May 16 00:51:24.238617 systemd-journald[899]: Journal started May 16 00:51:24.238636 systemd-journald[899]: Runtime Journal (/run/log/journal/ce0abba997da429ea636b185849751e9) is 8.0M, max 4.0G, 3.9G free. May 16 00:51:24.197418 systemd-modules-load[901]: Inserted module 'overlay' May 16 00:51:24.273309 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:51:24.221151 systemd-modules-load[901]: Inserted module 'br_netfilter' May 16 00:51:24.279867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:51:24.291474 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:51:24.302190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:51:24.313515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 00:51:24.340359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:51:24.357875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:51:24.365460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:51:24.377019 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:51:24.393130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:51:24.409446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:51:24.426360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:51:24.437779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:51:24.467308 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:51:24.480593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:51:24.487354 dracut-cmdline[945]: dracut-dracut-053 May 16 00:51:24.500605 dracut-cmdline[945]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a May 16 00:51:24.494744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:51:24.508936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:51:24.517039 systemd-resolved[955]: Positive Trust Anchors: May 16 00:51:24.517341 systemd-resolved[955]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:51:24.517373 systemd-resolved[955]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:51:24.532432 systemd-resolved[955]: Defaulting to hostname 'linux'. May 16 00:51:24.545947 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:51:24.661617 kernel: SCSI subsystem initialized May 16 00:51:24.565576 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:51:24.677430 kernel: Loading iSCSI transport class v2.0-870. May 16 00:51:24.691159 kernel: iscsi: registered transport (tcp) May 16 00:51:24.718265 kernel: iscsi: registered transport (qla4xxx) May 16 00:51:24.718295 kernel: QLogic iSCSI HBA Driver May 16 00:51:24.761707 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:51:24.786353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:51:24.831448 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:51:24.831478 kernel: device-mapper: uevent: version 1.0.3 May 16 00:51:24.841214 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:51:24.907164 kernel: raid6: neonx8 gen() 15839 MB/s May 16 00:51:24.933163 kernel: raid6: neonx4 gen() 15716 MB/s May 16 00:51:24.959162 kernel: raid6: neonx2 gen() 13456 MB/s May 16 00:51:24.984163 kernel: raid6: neonx1 gen() 10523 MB/s May 16 00:51:25.009163 kernel: raid6: int64x8 gen() 6984 MB/s May 16 00:51:25.034163 kernel: raid6: int64x4 gen() 7384 MB/s May 16 00:51:25.059163 kernel: raid6: int64x2 gen() 6153 MB/s May 16 00:51:25.087290 kernel: raid6: int64x1 gen() 5077 MB/s May 16 00:51:25.087311 kernel: raid6: using algorithm neonx8 gen() 15839 MB/s May 16 00:51:25.121692 kernel: raid6: .... xor() 11972 MB/s, rmw enabled May 16 00:51:25.121716 kernel: raid6: using neon recovery algorithm May 16 00:51:25.141163 kernel: xor: measuring software checksum speed May 16 00:51:25.141186 kernel: 8regs : 19617 MB/sec May 16 00:51:25.157159 kernel: 32regs : 19399 MB/sec May 16 00:51:25.168362 kernel: arm64_neon : 26708 MB/sec May 16 00:51:25.168382 kernel: xor: using function: arm64_neon (26708 MB/sec) May 16 00:51:25.229161 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:51:25.238965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:51:25.259283 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:51:25.272911 systemd-udevd[1147]: Using default interface naming scheme 'v255'. May 16 00:51:25.275952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:51:25.299301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:51:25.313628 dracut-pre-trigger[1159]: rd.md=0: removing MD RAID activation May 16 00:51:25.339931 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:51:25.360326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:51:25.466129 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:51:25.494853 kernel: pps_core: LinuxPPS API ver. 1 registered May 16 00:51:25.494889 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 16 00:51:25.516289 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:51:25.561254 kernel: ACPI: bus type USB registered May 16 00:51:25.561268 kernel: usbcore: registered new interface driver usbfs May 16 00:51:25.561278 kernel: usbcore: registered new interface driver hub May 16 00:51:25.561288 kernel: usbcore: registered new device driver usb May 16 00:51:25.561297 kernel: PTP clock support registered May 16 00:51:25.556497 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:51:25.718070 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 16 00:51:25.718083 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
May 16 00:51:25.718092 kernel: igb 0003:03:00.0: Adding to iommu group 31 May 16 00:51:25.718250 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32 May 16 00:51:25.718345 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 16 00:51:25.718424 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 16 00:51:25.718503 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 16 00:51:25.718579 kernel: nvme 0005:03:00.0: Adding to iommu group 33 May 16 00:51:25.718667 kernel: igb 0003:03:00.0: added PHC on eth0 May 16 00:51:25.718746 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 34 May 16 00:51:25.718831 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 16 00:51:25.718907 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 16 00:51:25.718990 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c4 May 16 00:51:25.715860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:51:25.723776 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:51:25.740257 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:51:25.791802 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 16 00:51:25.791942 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 16 00:51:25.792029 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 16 00:51:25.768365 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:51:25.808772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:51:25.808867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:51:25.825495 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:51:25.836480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:51:25.836525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:51:25.853716 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:51:25.876260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:51:25.885915 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:51:26.070920 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 16 00:51:26.071132 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 16 00:51:26.071220 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 16 00:51:26.071297 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 16 00:51:26.071372 kernel: nvme nvme0: pci function 0005:03:00.0 May 16 00:51:26.071466 kernel: hub 1-0:1.0: USB hub found May 16 00:51:26.071562 kernel: hub 1-0:1.0: 4 ports detected May 16 00:51:26.071641 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 16 00:51:26.071732 kernel: hub 2-0:1.0: USB hub found May 16 00:51:26.071816 kernel: hub 2-0:1.0: 4 ports detected May 16 00:51:26.071893 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 16 00:51:26.071981 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 16 00:51:26.072062 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 16 00:51:26.072133 kernel: nvme nvme1: pci function 0005:04:00.0 May 16 00:51:26.081431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:51:26.110268 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:51:26.125681 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 16 00:51:26.149528 kernel: igb 0003:03:00.1: added PHC on eth1 May 16 00:51:26.149715 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection May 16 00:51:26.161267 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c5 May 16 00:51:26.173229 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 May 16 00:51:26.183119 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 16 00:51:26.204158 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 16 00:51:26.204657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:51:26.282654 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:51:26.282669 kernel: GPT:9289727 != 1875385007 May 16 00:51:26.282678 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:51:26.282687 kernel: GPT:9289727 != 1875385007 May 16 00:51:26.282696 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:51:26.282706 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:51:26.282715 kernel: nvme nvme1: 32/0/0 default/read/poll queues May 16 00:51:26.284159 kernel: igb 0003:03:00.1 eno2: renamed from eth1 May 16 00:51:26.308164 kernel: BTRFS: device fsid 462ff9f1-7a02-4839-b355-edf30dab0598 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (1209) May 16 00:51:26.308179 kernel: igb 0003:03:00.0 eno1: renamed from eth0 May 16 00:51:26.308278 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1238) May 16 00:51:26.312408 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. May 16 00:51:26.369944 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd May 16 00:51:26.379255 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. May 16 00:51:26.401365 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged May 16 00:51:26.417232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 16 00:51:26.422520 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 16 00:51:26.442220 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 16 00:51:26.465262 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:51:26.491892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:51:26.491907 disk-uuid[1310]: Primary Header is updated. May 16 00:51:26.491907 disk-uuid[1310]: Secondary Entries is updated. 
May 16 00:51:26.491907 disk-uuid[1310]: Secondary Header is updated. May 16 00:51:26.526680 kernel: hub 1-3:1.0: USB hub found May 16 00:51:26.526835 kernel: hub 1-3:1.0: 4 ports detected May 16 00:51:26.619164 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd May 16 00:51:26.654269 kernel: hub 2-3:1.0: USB hub found May 16 00:51:26.654477 kernel: hub 2-3:1.0: 4 ports detected May 16 00:51:26.699165 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 16 00:51:26.712158 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 May 16 00:51:26.735266 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 May 16 00:51:26.735416 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 16 00:51:27.081314 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged May 16 00:51:27.387164 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 16 00:51:27.402161 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 May 16 00:51:27.423160 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 May 16 00:51:27.491169 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 16 00:51:27.491554 disk-uuid[1311]: The operation has completed successfully. May 16 00:51:27.516997 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:51:27.517081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:51:27.549297 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:51:27.554317 sh[1482]: Success May 16 00:51:27.578159 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 16 00:51:27.609947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:51:27.629388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:51:27.639731 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 00:51:27.645158 kernel: BTRFS info (device dm-0): first mount of filesystem 462ff9f1-7a02-4839-b355-edf30dab0598 May 16 00:51:27.645175 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:27.645185 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:51:27.645196 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:51:27.645205 kernel: BTRFS info (device dm-0): using free space tree May 16 00:51:27.649157 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 16 00:51:27.730747 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:51:27.737353 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:51:27.755257 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:51:27.761186 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 16 00:51:27.871800 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:27.871821 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:27.871831 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:51:27.871841 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:51:27.871851 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 16 00:51:27.871860 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:27.868139 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:51:27.893352 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 00:51:27.903942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:51:27.930293 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:51:27.950741 systemd-networkd[1683]: lo: Link UP May 16 00:51:27.950747 systemd-networkd[1683]: lo: Gained carrier May 16 00:51:27.954560 systemd-networkd[1683]: Enumeration completed May 16 00:51:27.954675 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:51:27.956012 systemd-networkd[1683]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:51:27.961477 systemd[1]: Reached target network.target - Network. May 16 00:51:27.982111 ignition[1670]: Ignition 2.20.0 May 16 00:51:27.992953 unknown[1670]: fetched base config from "system" May 16 00:51:27.982117 ignition[1670]: Stage: fetch-offline May 16 00:51:27.992961 unknown[1670]: fetched user config from "system" May 16 00:51:27.982196 ignition[1670]: no configs at "/usr/lib/ignition/base.d" May 16 00:51:27.995485 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:51:27.982204 ignition[1670]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:28.003559 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:51:27.982514 ignition[1670]: parsed url from cmdline: "" May 16 00:51:28.007597 systemd-networkd[1683]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:51:27.982517 ignition[1670]: no config URL provided May 16 00:51:28.018305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 00:51:27.982522 ignition[1670]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:51:28.061034 systemd-networkd[1683]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 16 00:51:27.982574 ignition[1670]: parsing config with SHA512: 9719527db92e4fcbc8b37b4802d2fb6285d108097370fec6c5cb007d1b3dfcddaf3c7b2caeb54f200264e71629d8b23850fc2e94bee5aef46319b741df576c24 May 16 00:51:27.993410 ignition[1670]: fetch-offline: fetch-offline passed May 16 00:51:27.993415 ignition[1670]: POST message to Packet Timeline May 16 00:51:27.993420 ignition[1670]: POST Status error: resource requires networking May 16 00:51:27.993491 ignition[1670]: Ignition finished successfully May 16 00:51:28.031768 ignition[1709]: Ignition 2.20.0 May 16 00:51:28.031773 ignition[1709]: Stage: kargs May 16 00:51:28.032001 ignition[1709]: no configs at "/usr/lib/ignition/base.d" May 16 00:51:28.032010 ignition[1709]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:28.033466 ignition[1709]: kargs: kargs passed May 16 00:51:28.033486 ignition[1709]: POST message to Packet Timeline May 16 00:51:28.033708 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:28.036944 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43214->[::1]:53: read: connection refused May 16 00:51:28.237186 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #2 May 16 00:51:28.237865 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53035->[::1]:53: read: connection refused May 16 00:51:28.640821 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #3 May 16 00:51:28.641453 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51312->[::1]:53: read: connection refused May 16 00:51:28.664530 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 16 00:51:28.656331 systemd-networkd[1683]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 16 00:51:29.279166 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 16 00:51:29.282207 systemd-networkd[1683]: eno1: Link UP May 16 00:51:29.282409 systemd-networkd[1683]: eno2: Link UP May 16 00:51:29.282534 systemd-networkd[1683]: enP1p1s0f0np0: Link UP May 16 00:51:29.282678 systemd-networkd[1683]: enP1p1s0f0np0: Gained carrier May 16 00:51:29.293396 systemd-networkd[1683]: enP1p1s0f1np1: Link UP May 16 00:51:29.333183 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv4 address 147.28.151.230/30, gateway 147.28.151.229 acquired from 147.28.144.140 May 16 00:51:29.442435 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #4 May 16 00:51:29.442873 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52608->[::1]:53: read: connection refused May 16 00:51:29.659636 systemd-networkd[1683]: enP1p1s0f1np1: Gained carrier May 16 00:51:30.595239 systemd-networkd[1683]: enP1p1s0f0np0: Gained IPv6LL May 16 00:51:30.787215 systemd-networkd[1683]: enP1p1s0f1np1: Gained IPv6LL May 16 00:51:31.043602 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #5 May 16 00:51:31.044058 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56469->[::1]:53: read: connection refused May 16 00:51:34.246515 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #6 May 16 00:51:34.896072 ignition[1709]: GET result: OK May 16 00:51:35.197573 ignition[1709]: Ignition finished successfully May 16 00:51:35.201321 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:51:35.215373 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 00:51:35.226448 ignition[1726]: Ignition 2.20.0 May 16 00:51:35.226455 ignition[1726]: Stage: disks May 16 00:51:35.226615 ignition[1726]: no configs at "/usr/lib/ignition/base.d" May 16 00:51:35.226624 ignition[1726]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:35.227539 ignition[1726]: disks: disks passed May 16 00:51:35.227544 ignition[1726]: POST message to Packet Timeline May 16 00:51:35.227564 ignition[1726]: GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:35.700444 ignition[1726]: GET result: OK May 16 00:51:35.986279 ignition[1726]: Ignition finished successfully May 16 00:51:35.988169 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:51:35.994625 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:51:36.002170 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:51:36.010157 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:51:36.018592 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:51:36.027418 systemd[1]: Reached target basic.target - Basic System. May 16 00:51:36.051255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:51:36.066524 systemd-fsck[1745]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 16 00:51:36.070126 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:51:36.092253 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:51:36.160177 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 759e3456-2e58-4307-81e1-19f20d3141c2 r/w with ordered data mode. Quota mode: none. 
May 16 00:51:36.160509 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:51:36.170698 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:51:36.192233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:51:36.200158 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1760) May 16 00:51:36.200176 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:36.200187 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:36.200196 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:51:36.201157 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:51:36.201169 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 16 00:51:36.293212 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:51:36.299592 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 16 00:51:36.311200 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 16 00:51:36.325871 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:51:36.325940 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:51:36.358047 coreos-metadata[1779]: May 16 00:51:36.354 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 16 00:51:36.374616 coreos-metadata[1778]: May 16 00:51:36.354 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 16 00:51:36.338736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:51:36.352613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:51:36.374331 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 00:51:36.407525 initrd-setup-root[1798]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:51:36.413572 initrd-setup-root[1805]: cut: /sysroot/etc/group: No such file or directory May 16 00:51:36.419894 initrd-setup-root[1813]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:51:36.426232 initrd-setup-root[1820]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:51:36.495312 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:51:36.515239 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:51:36.523160 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:36.546241 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:51:36.552629 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:51:36.571687 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 16 00:51:36.577037 ignition[1894]: INFO : Ignition 2.20.0 May 16 00:51:36.577037 ignition[1894]: INFO : Stage: mount May 16 00:51:36.577037 ignition[1894]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:51:36.577037 ignition[1894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:36.577037 ignition[1894]: INFO : mount: mount passed May 16 00:51:36.577037 ignition[1894]: INFO : POST message to Packet Timeline May 16 00:51:36.577037 ignition[1894]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:36.857729 coreos-metadata[1778]: May 16 00:51:36.857 INFO Fetch successful May 16 00:51:36.904405 coreos-metadata[1778]: May 16 00:51:36.904 INFO wrote hostname ci-4152.2.3-n-16e7659192 to /sysroot/etc/hostname May 16 00:51:36.907548 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 16 00:51:37.086804 ignition[1894]: INFO : GET result: OK May 16 00:51:37.136987 coreos-metadata[1779]: May 16 00:51:37.136 INFO Fetch successful May 16 00:51:37.184920 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 16 00:51:37.185064 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 16 00:51:37.420394 ignition[1894]: INFO : Ignition finished successfully May 16 00:51:37.422684 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:51:37.440256 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:51:37.452298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:51:37.487645 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1924) May 16 00:51:37.487679 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 16 00:51:37.501930 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 16 00:51:37.514845 kernel: BTRFS info (device nvme0n1p6): using free space tree May 16 00:51:37.537573 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 16 00:51:37.537598 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 16 00:51:37.545593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:51:37.574768 ignition[1941]: INFO : Ignition 2.20.0 May 16 00:51:37.574768 ignition[1941]: INFO : Stage: files May 16 00:51:37.584209 ignition[1941]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:51:37.584209 ignition[1941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:37.584209 ignition[1941]: DEBUG : files: compiled without relabeling support, skipping May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:51:37.584209 ignition[1941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:51:37.584209 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 00:51:37.584209 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 16 00:51:37.580356 unknown[1941]: wrote ssh authorized keys file for user: core May 16 00:51:37.704162 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:51:37.960889 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:37.971251 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 May 16 00:51:38.484132 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 00:51:38.805826 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:51:38.818394 ignition[1941]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:51:38.818394 ignition[1941]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:51:38.818394 ignition[1941]: INFO : files: files passed May 16 00:51:38.818394 ignition[1941]: INFO : POST message to Packet Timeline May 16 00:51:38.818394 ignition[1941]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:39.414507 ignition[1941]: INFO : GET result: OK May 16 00:51:40.607163 ignition[1941]: INFO : Ignition finished successfully May 16 00:51:40.610342 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:51:40.628282 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:51:40.634928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:51:40.646670 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:51:40.646746 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 00:51:40.664941 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:51:40.698950 initrd-setup-root-after-ignition[1981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:51:40.698950 initrd-setup-root-after-ignition[1981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:51:40.676934 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:51:40.733686 initrd-setup-root-after-ignition[1985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:51:40.697414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
May 16 00:51:40.730417 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:51:40.730499 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:51:40.739588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:51:40.756249 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:51:40.772898 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:51:40.784289 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:51:40.811353 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:51:40.835326 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:51:40.849671 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 00:51:40.858615 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:51:40.869807 systemd[1]: Stopped target timers.target - Timer Units. May 16 00:51:40.881062 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:51:40.881160 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:51:40.892429 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:51:40.903256 systemd[1]: Stopped target basic.target - Basic System. May 16 00:51:40.914334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:51:40.925395 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:51:40.936332 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:51:40.947301 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:51:40.958240 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:51:40.969210 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:51:40.980189 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:51:40.996556 systemd[1]: Stopped target swap.target - Swaps. May 16 00:51:41.007612 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:51:41.007701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:51:41.018901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 00:51:41.029796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:51:41.040928 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:51:41.044191 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:51:41.052077 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:51:41.052169 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:51:41.063353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:51:41.063436 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:51:41.074559 systemd[1]: Stopped target paths.target - Path Units. May 16 00:51:41.085545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:51:41.089173 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 16 00:51:41.102433 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:51:41.113759 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:51:41.125121 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:51:41.125210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:51:41.223276 ignition[2010]: INFO : Ignition 2.20.0 May 16 00:51:41.223276 ignition[2010]: INFO : Stage: umount May 16 00:51:41.223276 ignition[2010]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:51:41.223276 ignition[2010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 16 00:51:41.223276 ignition[2010]: INFO : umount: umount passed May 16 00:51:41.223276 ignition[2010]: INFO : POST message to Packet Timeline May 16 00:51:41.223276 ignition[2010]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 16 00:51:41.136544 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:51:41.136646 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:51:41.148067 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:51:41.148159 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:51:41.159582 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:51:41.159661 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:51:41.171020 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 16 00:51:41.171102 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 16 00:51:41.197277 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:51:41.205512 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:51:41.205611 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:51:41.218109 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:51:41.229228 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:51:41.229332 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:51:41.240829 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:51:41.240909 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:51:41.254274 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:51:41.255043 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:51:41.255127 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:51:41.264793 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:51:41.264865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:51:41.782622 ignition[2010]: INFO : GET result: OK May 16 00:51:42.070441 ignition[2010]: INFO : Ignition finished successfully May 16 00:51:42.072664 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:51:42.072850 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:51:42.080567 systemd[1]: Stopped target network.target - Network. May 16 00:51:42.089814 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:51:42.089874 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:51:42.099536 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:51:42.099568 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 16 00:51:42.108940 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:51:42.108972 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:51:42.118291 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:51:42.118336 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:51:42.127875 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:51:42.127905 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:51:42.137616 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 00:51:42.143175 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv6 lease lost May 16 00:51:42.147091 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:51:42.153266 systemd-networkd[1683]: enP1p1s0f1np1: DHCPv6 lease lost May 16 00:51:42.156883 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:51:42.156981 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:51:42.169213 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:51:42.169403 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:51:42.178076 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:51:42.178285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:51:42.198297 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:51:42.205394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:51:42.205442 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:51:42.215224 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:51:42.215258 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:51:42.224998 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:51:42.225027 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:51:42.235215 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:51:42.235245 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:51:42.245467 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:51:42.264430 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:51:42.264563 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:51:42.273701 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:51:42.273873 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:51:42.282500 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:51:42.282523 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:51:42.293034 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:51:42.293072 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 00:51:42.303924 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:51:42.303974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:51:42.319494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:51:42.319545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:51:42.336250 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:51:42.346989 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:51:42.347035 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:51:42.357999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:51:42.358031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:51:42.369487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:51:42.369554 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:51:42.924525 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:51:42.924686 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:51:42.935835 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 00:51:42.956342 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:51:42.969812 systemd[1]: Switching root. May 16 00:51:43.024985 systemd-journald[899]: Journal stopped May 16 00:51:44.972804 systemd-journald[899]: Received SIGTERM from PID 1 (systemd). May 16 00:51:44.972832 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:51:44.972842 kernel: SELinux: policy capability open_perms=1 May 16 00:51:44.972850 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:51:44.972858 kernel: SELinux: policy capability always_check_network=0 May 16 00:51:44.972866 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:51:44.972875 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:51:44.972884 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:51:44.972893 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:51:44.972900 kernel: audit: type=1403 audit(1747356703.208:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:51:44.972909 systemd[1]: Successfully loaded SELinux policy in 115.132ms. May 16 00:51:44.972921 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.437ms. May 16 00:51:44.972931 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:51:44.972939 systemd[1]: Detected architecture arm64. May 16 00:51:44.972950 systemd[1]: Detected first boot. May 16 00:51:44.972959 systemd[1]: Hostname set to . May 16 00:51:44.972968 systemd[1]: Initializing machine ID from random generator. May 16 00:51:44.972977 zram_generator::config[2084]: No configuration found. May 16 00:51:44.972987 systemd[1]: Populated /etc with preset unit settings. May 16 00:51:44.972996 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:51:44.973005 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 00:51:44.973014 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:51:44.973023 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 00:51:44.973032 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
May 16 00:51:44.973041 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 00:51:44.973050 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 00:51:44.973061 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 00:51:44.973070 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 00:51:44.973079 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 00:51:44.973088 systemd[1]: Created slice user.slice - User and Session Slice. May 16 00:51:44.973097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:51:44.973106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:51:44.973116 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 00:51:44.973126 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 00:51:44.973135 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 00:51:44.973144 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:51:44.973156 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 16 00:51:44.973165 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:51:44.973174 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 00:51:44.973183 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 00:51:44.973194 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 00:51:44.973204 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 00:51:44.973214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:51:44.973224 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:51:44.973233 systemd[1]: Reached target slices.target - Slice Units. May 16 00:51:44.973242 systemd[1]: Reached target swap.target - Swaps. May 16 00:51:44.973251 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 00:51:44.973260 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 00:51:44.973269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:51:44.973280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:51:44.973289 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:51:44.973299 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 00:51:44.973309 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 00:51:44.973320 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 00:51:44.973331 systemd[1]: Mounting media.mount - External Media Directory... May 16 00:51:44.973340 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 00:51:44.973349 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 00:51:44.973359 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 16 00:51:44.973369 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:51:44.973378 systemd[1]: Reached target machines.target - Containers. May 16 00:51:44.973388 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 00:51:44.973397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:51:44.973408 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:51:44.973417 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 00:51:44.973426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:51:44.973436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:51:44.973445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:51:44.973454 kernel: ACPI: bus type drm_connector registered May 16 00:51:44.973463 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 00:51:44.973472 kernel: fuse: init (API version 7.39) May 16 00:51:44.973481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:51:44.973491 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:51:44.973501 kernel: loop: module loaded May 16 00:51:44.973509 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:51:44.973519 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 00:51:44.973528 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:51:44.973537 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:51:44.973546 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:51:44.973555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:51:44.973583 systemd-journald[2193]: Collecting audit messages is disabled. May 16 00:51:44.973603 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 00:51:44.973613 systemd-journald[2193]: Journal started May 16 00:51:44.973633 systemd-journald[2193]: Runtime Journal (/run/log/journal/c0dc9e2b91fd4ba7b309f75b37f3dcaa) is 8.0M, max 4.0G, 3.9G free. May 16 00:51:43.720725 systemd[1]: Queued start job for default target multi-user.target. May 16 00:51:43.738612 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 16 00:51:43.738921 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 00:51:43.739223 systemd[1]: systemd-journald.service: Consumed 3.310s CPU time. May 16 00:51:45.024170 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 00:51:45.045180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:51:45.067513 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:51:45.067550 systemd[1]: Stopped verity-setup.service. May 16 00:51:45.092174 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:51:45.097332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 16 00:51:45.102722 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 00:51:45.108006 systemd[1]: Mounted media.mount - External Media Directory. May 16 00:51:45.113212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 00:51:45.118404 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 00:51:45.123528 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 00:51:45.128771 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 00:51:45.134122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:51:45.139525 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:51:45.139692 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 00:51:45.145014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:51:45.146189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:51:45.151421 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:51:45.152266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:51:45.157344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:51:45.157493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:51:45.162498 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:51:45.164185 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 00:51:45.169277 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:51:45.169417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:51:45.176357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:51:45.181176 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 00:51:45.186193 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 00:51:45.191006 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:51:45.206310 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 00:51:45.231245 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 00:51:45.237105 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 00:51:45.241801 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:51:45.241831 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:51:45.247339 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 16 00:51:45.253026 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 00:51:45.258907 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 00:51:45.263679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:51:45.265326 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 00:51:45.271086 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
May 16 00:51:45.275813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:51:45.276991 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 00:51:45.281713 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:51:45.282391 systemd-journald[2193]: Time spent on flushing to /var/log/journal/c0dc9e2b91fd4ba7b309f75b37f3dcaa is 25.218ms for 2346 entries. May 16 00:51:45.282391 systemd-journald[2193]: System Journal (/var/log/journal/c0dc9e2b91fd4ba7b309f75b37f3dcaa) is 8.0M, max 195.6M, 187.6M free. May 16 00:51:45.324385 systemd-journald[2193]: Received client request to flush runtime journal. May 16 00:51:45.324430 kernel: loop0: detected capacity change from 0 to 113536 May 16 00:51:45.282921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:51:45.300449 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 00:51:45.306128 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 00:51:45.311838 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 16 00:51:45.338160 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:51:45.346657 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 00:51:45.351258 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 00:51:45.356320 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 00:51:45.360920 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 00:51:45.365643 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 00:51:45.368164 kernel: loop1: detected capacity change from 0 to 8 May 16 00:51:45.380052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:51:45.385703 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 00:51:45.397075 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 00:51:45.413163 kernel: loop2: detected capacity change from 0 to 116808 May 16 00:51:45.418375 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 16 00:51:45.424545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:51:45.430291 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:51:45.430965 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 16 00:51:45.436763 udevadm[2238]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 16 00:51:45.446911 systemd-tmpfiles[2260]: ACLs are not supported, ignoring. May 16 00:51:45.446923 systemd-tmpfiles[2260]: ACLs are not supported, ignoring. May 16 00:51:45.450677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:51:45.475166 kernel: loop3: detected capacity change from 0 to 207008 May 16 00:51:45.511312 ldconfig[2224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 16 00:51:45.513048 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 00:51:45.515161 kernel: loop4: detected capacity change from 0 to 113536 May 16 00:51:45.538164 kernel: loop5: detected capacity change from 0 to 8 May 16 00:51:45.550167 kernel: loop6: detected capacity change from 0 to 116808 May 16 00:51:45.565166 kernel: loop7: detected capacity change from 0 to 207008 May 16 00:51:45.571365 (sd-merge)[2277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 16 00:51:45.571783 (sd-merge)[2277]: Merged extensions into '/usr'. May 16 00:51:45.574715 systemd[1]: Reloading requested from client PID 2232 ('systemd-sysext') (unit systemd-sysext.service)... May 16 00:51:45.574727 systemd[1]: Reloading... May 16 00:51:45.617164 zram_generator::config[2305]: No configuration found. May 16 00:51:45.709143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:51:45.756287 systemd[1]: Reloading finished in 181 ms. May 16 00:51:45.789222 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 00:51:45.794095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 00:51:45.812306 systemd[1]: Starting ensure-sysext.service... May 16 00:51:45.818171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:51:45.824509 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:51:45.831127 systemd[1]: Reloading requested from client PID 2358 ('systemctl') (unit ensure-sysext.service)... May 16 00:51:45.831138 systemd[1]: Reloading... May 16 00:51:45.837631 systemd-tmpfiles[2360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:51:45.837876 systemd-tmpfiles[2360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 00:51:45.838493 systemd-tmpfiles[2360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 00:51:45.838699 systemd-tmpfiles[2360]: ACLs are not supported, ignoring. May 16 00:51:45.838747 systemd-tmpfiles[2360]: ACLs are not supported, ignoring. May 16 00:51:45.841338 systemd-tmpfiles[2360]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:51:45.841346 systemd-tmpfiles[2360]: Skipping /boot May 16 00:51:45.848286 systemd-tmpfiles[2360]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:51:45.848294 systemd-tmpfiles[2360]: Skipping /boot May 16 00:51:45.849590 systemd-udevd[2361]: Using default interface naming scheme 'v255'. May 16 00:51:45.878161 zram_generator::config[2396]: No configuration found. 
May 16 00:51:45.909178 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2436) May 16 00:51:45.919160 kernel: IPMI message handler: version 39.2 May 16 00:51:45.930159 kernel: ipmi device interface May 16 00:51:45.941162 kernel: ipmi_ssif: IPMI SSIF Interface driver May 16 00:51:45.941185 kernel: ipmi_si: IPMI System Interface driver May 16 00:51:45.954662 kernel: ipmi_si: Unable to find any System Interface(s) May 16 00:51:45.987974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:51:46.049940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 16 00:51:46.054468 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 16 00:51:46.054773 systemd[1]: Reloading finished in 223 ms. May 16 00:51:46.072790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:51:46.088511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:51:46.108628 systemd[1]: Finished ensure-sysext.service. May 16 00:51:46.114014 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 00:51:46.144400 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:51:46.150357 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 00:51:46.155475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:51:46.156704 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 00:51:46.162651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:51:46.168451 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:51:46.174149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:51:46.175233 lvm[2537]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:51:46.179973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:51:46.184917 augenrules[2558]: No rules May 16 00:51:46.184963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:51:46.185913 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 00:51:46.191756 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 00:51:46.198356 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:51:46.204994 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:51:46.211232 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 00:51:46.216829 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 00:51:46.222384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:51:46.227646 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:51:46.228415 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 16 00:51:46.233396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 00:51:46.238350 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 00:51:46.243120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:51:46.243256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:51:46.247984 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:51:46.248103 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:51:46.252869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:51:46.252992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:51:46.257920 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:51:46.258042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:51:46.262788 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 00:51:46.267707 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 00:51:46.274472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:51:46.285978 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:51:46.301379 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 00:51:46.305711 lvm[2588]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:51:46.305819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:51:46.305884 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:51:46.307098 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 00:51:46.313551 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 00:51:46.318196 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:51:46.318649 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 00:51:46.324065 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 00:51:46.339219 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 00:51:46.346351 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 00:51:46.402237 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 00:51:46.406781 systemd-resolved[2567]: Positive Trust Anchors: May 16 00:51:46.406858 systemd-resolved[2567]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:51:46.406890 systemd-resolved[2567]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:51:46.407157 systemd[1]: Reached target time-set.target - System Time Set. May 16 00:51:46.410608 systemd-resolved[2567]: Using system hostname 'ci-4152.2.3-n-16e7659192'. May 16 00:51:46.412095 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:51:46.414028 systemd-networkd[2566]: lo: Link UP May 16 00:51:46.414035 systemd-networkd[2566]: lo: Gained carrier May 16 00:51:46.417401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:51:46.417728 systemd-networkd[2566]: bond0: netdev ready May 16 00:51:46.421681 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:51:46.426119 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 00:51:46.427215 systemd-networkd[2566]: Enumeration completed May 16 00:51:46.430310 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 00:51:46.434749 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 00:51:46.436477 systemd-networkd[2566]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:52:20:00.network. May 16 00:51:46.439038 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 00:51:46.443335 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 00:51:46.447646 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:51:46.447668 systemd[1]: Reached target paths.target - Path Units. May 16 00:51:46.451944 systemd[1]: Reached target timers.target - Timer Units. May 16 00:51:46.456866 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 00:51:46.462601 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 00:51:46.477438 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 00:51:46.482264 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:51:46.486723 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 00:51:46.491176 systemd[1]: Reached target network.target - Network. May 16 00:51:46.495543 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:51:46.499696 systemd[1]: Reached target basic.target - Basic System. May 16 00:51:46.503796 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 00:51:46.503817 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 00:51:46.515226 systemd[1]: Starting containerd.service - containerd container runtime... 
May 16 00:51:46.520629 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 16 00:51:46.526025 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 00:51:46.531520 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 00:51:46.537053 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 00:51:46.541368 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 00:51:46.541938 jq[2621]: false May 16 00:51:46.542403 coreos-metadata[2617]: May 16 00:51:46.542 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 16 00:51:46.542481 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 00:51:46.545703 coreos-metadata[2617]: May 16 00:51:46.545 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 16 00:51:46.547824 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 00:51:46.548131 dbus-daemon[2618]: [system] SELinux support is enabled May 16 00:51:46.553272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 00:51:46.556289 extend-filesystems[2623]: Found loop4 May 16 00:51:46.562173 extend-filesystems[2623]: Found loop5 May 16 00:51:46.562173 extend-filesystems[2623]: Found loop6 May 16 00:51:46.562173 extend-filesystems[2623]: Found loop7 May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1 May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1p1 May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1p2 May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1p3 May 16 00:51:46.562173 extend-filesystems[2623]: Found usr May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1p4 May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1p6 May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1p7 May 16 00:51:46.562173 extend-filesystems[2623]: Found nvme0n1p9 May 16 00:51:46.562173 extend-filesystems[2623]: Checking size of /dev/nvme0n1p9 May 16 00:51:46.678902 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks May 16 00:51:46.678936 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2476) May 16 00:51:46.558959 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 00:51:46.679103 extend-filesystems[2623]: Resized partition /dev/nvme0n1p9 May 16 00:51:46.570571 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 00:51:46.687467 extend-filesystems[2643]: resize2fs 1.47.1 (20-May-2024) May 16 00:51:46.576639 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 00:51:46.616831 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:51:46.617469 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:51:46.618160 systemd[1]: Starting update-engine.service - Update Engine... 
May 16 00:51:46.692253 update_engine[2659]: I20250516 00:51:46.660553 2659 main.cc:92] Flatcar Update Engine starting May 16 00:51:46.692253 update_engine[2659]: I20250516 00:51:46.663189 2659 update_check_scheduler.cc:74] Next update check in 3m34s May 16 00:51:46.624653 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 00:51:46.692513 jq[2660]: true May 16 00:51:46.632944 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 00:51:46.667591 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:51:46.667790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 00:51:46.668079 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:51:46.668241 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 00:51:46.669801 systemd-logind[2644]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:51:46.670376 systemd-logind[2644]: New seat seat0. May 16 00:51:46.675453 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:51:46.675617 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 00:51:46.687765 systemd[1]: Started systemd-logind.service - User Login Management. May 16 00:51:46.692665 (ntainerd)[2665]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 00:51:46.696101 jq[2664]: true May 16 00:51:46.702915 dbus-daemon[2618]: [system] Successfully activated service 'org.freedesktop.systemd1' May 16 00:51:46.703880 tar[2663]: linux-arm64/LICENSE May 16 00:51:46.704037 tar[2663]: linux-arm64/helm May 16 00:51:46.710950 systemd[1]: Started update-engine.service - Update Engine. May 16 00:51:46.715934 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:51:46.716089 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 00:51:46.720592 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:51:46.720694 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 00:51:46.724531 bash[2690]: Updated "/home/core/.ssh/authorized_keys" May 16 00:51:46.726321 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 00:51:46.734055 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 00:51:46.741120 systemd[1]: Starting sshkeys.service... May 16 00:51:46.753473 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 16 00:51:46.759194 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 16 00:51:46.767339 locksmithd[2691]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:51:46.779140 coreos-metadata[2704]: May 16 00:51:46.779 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 16 00:51:46.780260 coreos-metadata[2704]: May 16 00:51:46.780 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 16 00:51:46.828493 containerd[2665]: time="2025-05-16T00:51:46.828410960Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 16 00:51:46.851011 containerd[2665]: time="2025-05-16T00:51:46.850971960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:51:46.852293 containerd[2665]: time="2025-05-16T00:51:46.852225840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:51:46.852293 containerd[2665]: time="2025-05-16T00:51:46.852253040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:51:46.852293 containerd[2665]: time="2025-05-16T00:51:46.852268200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:51:46.852455 containerd[2665]: time="2025-05-16T00:51:46.852436360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 16 00:51:46.852482 containerd[2665]: time="2025-05-16T00:51:46.852457000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 16 00:51:46.852545 containerd[2665]: time="2025-05-16T00:51:46.852507720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:51:46.852545 containerd[2665]: time="2025-05-16T00:51:46.852519280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:51:46.852684 containerd[2665]: time="2025-05-16T00:51:46.852667080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:51:46.852684 containerd[2665]: time="2025-05-16T00:51:46.852681160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:51:46.852730 containerd[2665]: time="2025-05-16T00:51:46.852694760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:51:46.852730 containerd[2665]: time="2025-05-16T00:51:46.852704840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:51:46.852817 containerd[2665]: time="2025-05-16T00:51:46.852771240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 16 00:51:46.853180 containerd[2665]: time="2025-05-16T00:51:46.853160240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:51:46.853283 containerd[2665]: time="2025-05-16T00:51:46.853263240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:51:46.853283 containerd[2665]: time="2025-05-16T00:51:46.853277480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:51:46.853376 containerd[2665]: time="2025-05-16T00:51:46.853365160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:51:46.853413 containerd[2665]: time="2025-05-16T00:51:46.853404360Z" level=info msg="metadata content store policy set" policy=shared May 16 00:51:46.860353 containerd[2665]: time="2025-05-16T00:51:46.860327800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:51:46.860418 containerd[2665]: time="2025-05-16T00:51:46.860387560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:51:46.860418 containerd[2665]: time="2025-05-16T00:51:46.860402960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 16 00:51:46.860466 containerd[2665]: time="2025-05-16T00:51:46.860418440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 16 00:51:46.860466 containerd[2665]: time="2025-05-16T00:51:46.860432760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:51:46.860594 containerd[2665]: time="2025-05-16T00:51:46.860566320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:51:46.860792 containerd[2665]: time="2025-05-16T00:51:46.860779280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:51:46.860896 containerd[2665]: time="2025-05-16T00:51:46.860874200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 16 00:51:46.860896 containerd[2665]: time="2025-05-16T00:51:46.860891040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 16 00:51:46.860946 containerd[2665]: time="2025-05-16T00:51:46.860906920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 16 00:51:46.860946 containerd[2665]: time="2025-05-16T00:51:46.860920720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:51:46.860946 containerd[2665]: time="2025-05-16T00:51:46.860933200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:51:46.860946 containerd[2665]: time="2025-05-16T00:51:46.860944600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 16 00:51:46.861014 containerd[2665]: time="2025-05-16T00:51:46.860957840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:51:46.861014 containerd[2665]: time="2025-05-16T00:51:46.860972480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:51:46.861014 containerd[2665]: time="2025-05-16T00:51:46.860985880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:51:46.861014 containerd[2665]: time="2025-05-16T00:51:46.860997600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:51:46.861014 containerd[2665]: time="2025-05-16T00:51:46.861008400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:51:46.861094 containerd[2665]: time="2025-05-16T00:51:46.861027840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861094 containerd[2665]: time="2025-05-16T00:51:46.861049720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861094 containerd[2665]: time="2025-05-16T00:51:46.861063600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861094 containerd[2665]: time="2025-05-16T00:51:46.861076640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861094 containerd[2665]: time="2025-05-16T00:51:46.861088200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861190 containerd[2665]: time="2025-05-16T00:51:46.861101080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861190 containerd[2665]: time="2025-05-16T00:51:46.861112040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861190 containerd[2665]: time="2025-05-16T00:51:46.861125120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861190 containerd[2665]: time="2025-05-16T00:51:46.861137880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861190 containerd[2665]: time="2025-05-16T00:51:46.861151800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861190 containerd[2665]: time="2025-05-16T00:51:46.861170120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861190 containerd[2665]: time="2025-05-16T00:51:46.861181560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861307 containerd[2665]: time="2025-05-16T00:51:46.861193160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861307 containerd[2665]: time="2025-05-16T00:51:46.861208840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 May 16 00:51:46.861307 containerd[2665]: time="2025-05-16T00:51:46.861228200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861307 containerd[2665]: time="2025-05-16T00:51:46.861248440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861307 containerd[2665]: time="2025-05-16T00:51:46.861259160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:51:46.861486 containerd[2665]: time="2025-05-16T00:51:46.861425040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:51:46.861486 containerd[2665]: time="2025-05-16T00:51:46.861441800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 16 00:51:46.861486 containerd[2665]: time="2025-05-16T00:51:46.861452680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:51:46.861486 containerd[2665]: time="2025-05-16T00:51:46.861464040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 16 00:51:46.861486 containerd[2665]: time="2025-05-16T00:51:46.861473400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:51:46.861486 containerd[2665]: time="2025-05-16T00:51:46.861485680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 16 00:51:46.861652 containerd[2665]: time="2025-05-16T00:51:46.861497040Z" level=info msg="NRI interface is disabled by configuration." May 16 00:51:46.861652 containerd[2665]: time="2025-05-16T00:51:46.861509040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 16 00:51:46.861877 containerd[2665]: time="2025-05-16T00:51:46.861832280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:51:46.861985 containerd[2665]: time="2025-05-16T00:51:46.861881040Z" level=info msg="Connect containerd service" May 16 00:51:46.861985 containerd[2665]: time="2025-05-16T00:51:46.861908040Z" level=info msg="using legacy CRI server" May 16 00:51:46.861985 containerd[2665]: time="2025-05-16T00:51:46.861915320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:51:46.862157 containerd[2665]: time="2025-05-16T00:51:46.862142360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:51:46.862749 containerd[2665]: time="2025-05-16T00:51:46.862727160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:51:46.862953 
containerd[2665]: time="2025-05-16T00:51:46.862917920Z" level=info msg="Start subscribing containerd event" May 16 00:51:46.862987 containerd[2665]: time="2025-05-16T00:51:46.862971840Z" level=info msg="Start recovering state" May 16 00:51:46.863050 containerd[2665]: time="2025-05-16T00:51:46.863035560Z" level=info msg="Start event monitor" May 16 00:51:46.863076 containerd[2665]: time="2025-05-16T00:51:46.863049160Z" level=info msg="Start snapshots syncer" May 16 00:51:46.863076 containerd[2665]: time="2025-05-16T00:51:46.863060520Z" level=info msg="Start cni network conf syncer for default" May 16 00:51:46.863076 containerd[2665]: time="2025-05-16T00:51:46.863069680Z" level=info msg="Start streaming server" May 16 00:51:46.863276 containerd[2665]: time="2025-05-16T00:51:46.863257480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:51:46.863320 containerd[2665]: time="2025-05-16T00:51:46.863299320Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:51:46.863378 containerd[2665]: time="2025-05-16T00:51:46.863351720Z" level=info msg="containerd successfully booted in 0.036243s" May 16 00:51:46.863407 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:51:47.030025 tar[2663]: linux-arm64/README.md May 16 00:51:47.047193 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 00:51:47.126164 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 16 00:51:47.142512 extend-filesystems[2643]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 16 00:51:47.142512 extend-filesystems[2643]: old_desc_blocks = 1, new_desc_blocks = 112 May 16 00:51:47.142512 extend-filesystems[2643]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 16 00:51:47.169533 extend-filesystems[2623]: Resized filesystem in /dev/nvme0n1p9 May 16 00:51:47.169533 extend-filesystems[2623]: Found nvme1n1 May 16 00:51:47.145237 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:51:47.145531 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 00:51:47.362710 sshd_keygen[2648]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:51:47.381437 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 00:51:47.404562 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 00:51:47.413327 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:51:47.413545 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 00:51:47.419827 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 00:51:47.432207 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 00:51:47.438099 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 00:51:47.443936 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 16 00:51:47.448843 systemd[1]: Reached target getty.target - Login Prompts. 
May 16 00:51:47.545858 coreos-metadata[2617]: May 16 00:51:47.545 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 16 00:51:47.546331 coreos-metadata[2617]: May 16 00:51:47.546 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 16 00:51:47.780399 coreos-metadata[2704]: May 16 00:51:47.780 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 16 00:51:47.780780 coreos-metadata[2704]: May 16 00:51:47.780 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 16 00:51:47.785161 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 16 00:51:47.802163 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 16 00:51:47.803106 systemd-networkd[2566]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:52:20:01.network. May 16 00:51:48.401164 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 16 00:51:48.417874 systemd-networkd[2566]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 16 00:51:48.418162 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 16 00:51:48.419299 systemd-networkd[2566]: enP1p1s0f0np0: Link UP May 16 00:51:48.420034 systemd-networkd[2566]: enP1p1s0f0np0: Gained carrier May 16 00:51:48.438161 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 16 00:51:48.450037 systemd-networkd[2566]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:52:20:00.network. May 16 00:51:48.450536 systemd-networkd[2566]: enP1p1s0f1np1: Link UP May 16 00:51:48.451229 systemd-networkd[2566]: enP1p1s0f1np1: Gained carrier May 16 00:51:48.464379 systemd-networkd[2566]: bond0: Link UP May 16 00:51:48.464712 systemd-networkd[2566]: bond0: Gained carrier May 16 00:51:48.464902 systemd-timesyncd[2568]: Network configuration changed, trying to establish connection. May 16 00:51:48.465470 systemd-timesyncd[2568]: Network configuration changed, trying to establish connection. May 16 00:51:48.465807 systemd-timesyncd[2568]: Network configuration changed, trying to establish connection. May 16 00:51:48.465948 systemd-timesyncd[2568]: Network configuration changed, trying to establish connection. May 16 00:51:48.542232 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 16 00:51:48.542270 kernel: bond0: active interface up! May 16 00:51:48.667165 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 16 00:51:49.539586 systemd-timesyncd[2568]: Network configuration changed, trying to establish connection. May 16 00:51:49.546420 coreos-metadata[2617]: May 16 00:51:49.546 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 16 00:51:49.780891 coreos-metadata[2704]: May 16 00:51:49.780 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 16 00:51:50.499226 systemd-networkd[2566]: bond0: Gained IPv6LL May 16 00:51:50.499502 systemd-timesyncd[2568]: Network configuration changed, trying to establish connection. May 16 00:51:50.502100 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:51:50.507953 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:51:50.524346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:51:50.530819 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 16 00:51:50.552230 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:51:51.211979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:51:51.217892 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:51:51.614840 kubelet[2772]: E0516 00:51:51.614801 2772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:51:51.617113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:51:51.617262 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:51:51.675476 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 00:51:51.691462 systemd[1]: Started sshd@0-147.28.151.230:22-139.178.89.65:43736.service - OpenSSH per-connection server daemon (139.178.89.65:43736). May 16 00:51:52.099449 coreos-metadata[2617]: May 16 00:51:52.099 INFO Fetch successful May 16 00:51:52.103409 coreos-metadata[2704]: May 16 00:51:52.103 INFO Fetch successful May 16 00:51:52.114316 sshd[2794]: Accepted publickey for core from 139.178.89.65 port 43736 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 00:51:52.115855 sshd-session[2794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:52.123706 systemd-logind[2644]: New session 1 of user core. May 16 00:51:52.125102 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 00:51:52.145388 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 00:51:52.154074 unknown[2704]: wrote ssh authorized keys file for user: core May 16 00:51:52.155599 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 00:51:52.166384 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 00:51:52.173726 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 16 00:51:52.174754 (systemd)[2804]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:51:52.175318 update-ssh-keys[2798]: Updated "/home/core/.ssh/authorized_keys" May 16 00:51:52.180208 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 16 00:51:52.186469 systemd[1]: Finished sshkeys.service. May 16 00:51:52.196365 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 16 00:51:52.234121 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 16 00:51:52.234338 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 16 00:51:52.307216 systemd[2804]: Queued start job for default target default.target. May 16 00:51:52.316234 systemd[2804]: Created slice app.slice - User Application Slice. May 16 00:51:52.316259 systemd[2804]: Reached target paths.target - Paths. May 16 00:51:52.316271 systemd[2804]: Reached target timers.target - Timers. May 16 00:51:52.317510 systemd[2804]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 00:51:52.326347 systemd[2804]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 00:51:52.326400 systemd[2804]: Reached target sockets.target - Sockets. 
May 16 00:51:52.326412 systemd[2804]: Reached target basic.target - Basic System. May 16 00:51:52.326454 systemd[2804]: Reached target default.target - Main User Target. May 16 00:51:52.326476 systemd[2804]: Startup finished in 124ms. May 16 00:51:52.326915 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 00:51:52.332911 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 00:51:52.489674 login[2749]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 16 00:51:52.491041 login[2750]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:52.494054 systemd-logind[2644]: New session 2 of user core. May 16 00:51:52.510266 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 00:51:52.563549 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 16 00:51:52.563982 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 00:51:52.564663 systemd[1]: Startup finished in 3.226s (kernel) + 19.735s (initrd) + 9.471s (userspace) = 32.432s. May 16 00:51:52.651519 systemd[1]: Started sshd@1-147.28.151.230:22-139.178.89.65:43744.service - OpenSSH per-connection server daemon (139.178.89.65:43744). May 16 00:51:53.085457 sshd[2838]: Accepted publickey for core from 139.178.89.65 port 43744 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 00:51:53.086490 sshd-session[2838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:53.089321 systemd-logind[2644]: New session 4 of user core. May 16 00:51:53.103257 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 00:51:53.390292 sshd[2841]: Connection closed by 139.178.89.65 port 43744 May 16 00:51:53.390565 sshd-session[2838]: pam_unix(sshd:session): session closed for user core May 16 00:51:53.393202 systemd[1]: sshd@1-147.28.151.230:22-139.178.89.65:43744.service: Deactivated successfully. May 16 00:51:53.395732 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:51:53.396228 systemd-logind[2644]: Session 4 logged out. Waiting for processes to exit. May 16 00:51:53.396722 systemd-logind[2644]: Removed session 4. May 16 00:51:53.465433 systemd[1]: Started sshd@2-147.28.151.230:22-139.178.89.65:43752.service - OpenSSH per-connection server daemon (139.178.89.65:43752). May 16 00:51:53.490033 login[2749]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:53.492690 systemd-logind[2644]: New session 3 of user core. May 16 00:51:53.505325 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 00:51:53.905943 sshd[2846]: Accepted publickey for core from 139.178.89.65 port 43752 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 00:51:53.907025 sshd-session[2846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:53.909609 systemd-logind[2644]: New session 5 of user core. May 16 00:51:53.922255 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 00:51:54.208409 sshd[2858]: Connection closed by 139.178.89.65 port 43752 May 16 00:51:54.208947 sshd-session[2846]: pam_unix(sshd:session): session closed for user core May 16 00:51:54.212463 systemd[1]: sshd@2-147.28.151.230:22-139.178.89.65:43752.service: Deactivated successfully. May 16 00:51:54.214635 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:51:54.215130 systemd-logind[2644]: Session 5 logged out. Waiting for processes to exit. 
May 16 00:51:54.215685 systemd-logind[2644]: Removed session 5. May 16 00:51:54.282410 systemd[1]: Started sshd@3-147.28.151.230:22-139.178.89.65:43756.service - OpenSSH per-connection server daemon (139.178.89.65:43756). May 16 00:51:54.717934 sshd[2863]: Accepted publickey for core from 139.178.89.65 port 43756 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 00:51:54.718933 sshd-session[2863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:54.721602 systemd-logind[2644]: New session 6 of user core. May 16 00:51:54.734305 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 00:51:55.021360 sshd[2865]: Connection closed by 139.178.89.65 port 43756 May 16 00:51:55.021760 sshd-session[2863]: pam_unix(sshd:session): session closed for user core May 16 00:51:55.024777 systemd[1]: sshd@3-147.28.151.230:22-139.178.89.65:43756.service: Deactivated successfully. May 16 00:51:55.026338 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:51:55.026796 systemd-logind[2644]: Session 6 logged out. Waiting for processes to exit. May 16 00:51:55.027320 systemd-logind[2644]: Removed session 6. May 16 00:51:55.100217 systemd[1]: Started sshd@4-147.28.151.230:22-139.178.89.65:43770.service - OpenSSH per-connection server daemon (139.178.89.65:43770). May 16 00:51:55.540716 sshd[2870]: Accepted publickey for core from 139.178.89.65 port 43770 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 00:51:55.541699 sshd-session[2870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:55.544424 systemd-logind[2644]: New session 7 of user core. May 16 00:51:55.554316 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 00:51:55.788059 sudo[2873]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 00:51:55.788334 sudo[2873]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:51:55.804957 sudo[2873]: pam_unix(sudo:session): session closed for user root May 16 00:51:55.870691 sshd[2872]: Connection closed by 139.178.89.65 port 43770 May 16 00:51:55.871296 sshd-session[2870]: pam_unix(sshd:session): session closed for user core May 16 00:51:55.875325 systemd[1]: sshd@4-147.28.151.230:22-139.178.89.65:43770.service: Deactivated successfully. May 16 00:51:55.877088 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:51:55.877636 systemd-logind[2644]: Session 7 logged out. Waiting for processes to exit. May 16 00:51:55.878258 systemd-logind[2644]: Removed session 7. May 16 00:51:55.938514 systemd[1]: Started sshd@5-147.28.151.230:22-139.178.89.65:43774.service - OpenSSH per-connection server daemon (139.178.89.65:43774). May 16 00:51:56.351846 sshd[2878]: Accepted publickey for core from 139.178.89.65 port 43774 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 00:51:56.352949 sshd-session[2878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:56.355780 systemd-logind[2644]: New session 8 of user core. May 16 00:51:56.366320 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 16 00:51:56.579825 sudo[2882]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 00:51:56.580075 sudo[2882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:51:56.582498 sudo[2882]: pam_unix(sudo:session): session closed for user root May 16 00:51:56.586724 sudo[2881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 00:51:56.586971 sudo[2881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:51:56.612445 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:51:56.634385 augenrules[2904]: No rules May 16 00:51:56.635506 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:51:56.636258 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:51:56.637114 sudo[2881]: pam_unix(sudo:session): session closed for user root May 16 00:51:56.698213 sshd[2880]: Connection closed by 139.178.89.65 port 43774 May 16 00:51:56.698593 sshd-session[2878]: pam_unix(sshd:session): session closed for user core May 16 00:51:56.701398 systemd[1]: sshd@5-147.28.151.230:22-139.178.89.65:43774.service: Deactivated successfully. May 16 00:51:56.703643 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:51:56.704170 systemd-logind[2644]: Session 8 logged out. Waiting for processes to exit. May 16 00:51:56.704920 systemd-logind[2644]: Removed session 8. May 16 00:51:56.773356 systemd[1]: Started sshd@6-147.28.151.230:22-139.178.89.65:43790.service - OpenSSH per-connection server daemon (139.178.89.65:43790). May 16 00:51:57.209699 sshd[2912]: Accepted publickey for core from 139.178.89.65 port 43790 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 00:51:57.210678 sshd-session[2912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:51:57.213521 systemd-logind[2644]: New session 9 of user core. May 16 00:51:57.229305 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:51:57.449549 sudo[2915]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:51:57.449814 sudo[2915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:51:57.710449 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 00:51:57.710514 (dockerd)[2945]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 00:51:57.918734 dockerd[2945]: time="2025-05-16T00:51:57.918687440Z" level=info msg="Starting up" May 16 00:51:57.979728 dockerd[2945]: time="2025-05-16T00:51:57.979661360Z" level=info msg="Loading containers: start." May 16 00:51:58.116163 kernel: Initializing XFRM netlink socket May 16 00:51:58.134090 systemd-timesyncd[2568]: Network configuration changed, trying to establish connection. May 16 00:51:58.191948 systemd-networkd[2566]: docker0: Link UP May 16 00:51:58.227351 dockerd[2945]: time="2025-05-16T00:51:58.227317440Z" level=info msg="Loading containers: done." 
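dockerd comes up with the same kind of notice the kubelet unit printed: docker.service references DOCKER_CGROUPS, DOCKER_OPTS and the other DOCKER_OPT_* variables, none of which are set, so they expand to empty strings and the daemon starts with its defaults. As with the kubelet, any of them could be supplied through a drop-in; the file name and value here are invented for illustration only:

    # /etc/systemd/system/docker.service.d/10-opts.conf  (hypothetical example)
    [Service]
    Environment="DOCKER_OPTS=--log-level=warn"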
May 16 00:51:58.236052 dockerd[2945]: time="2025-05-16T00:51:58.235998280Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:51:58.236122 dockerd[2945]: time="2025-05-16T00:51:58.236066880Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 16 00:51:58.236186 dockerd[2945]: time="2025-05-16T00:51:58.236171360Z" level=info msg="Daemon has completed initialization" May 16 00:51:58.254919 dockerd[2945]: time="2025-05-16T00:51:58.254859640Z" level=info msg="API listen on /run/docker.sock" May 16 00:51:58.254987 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 00:51:58.969765 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck651297212-merged.mount: Deactivated successfully. May 16 00:51:59.212349 containerd[2665]: time="2025-05-16T00:51:59.212316600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 00:51:59.745786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800175197.mount: Deactivated successfully. May 16 00:51:59.491018 systemd-resolved[2567]: Clock change detected. Flushing caches. May 16 00:51:59.512157 systemd-journald[2193]: Time jumped backwards, rotating. May 16 00:51:59.491243 systemd-timesyncd[2568]: Contacted time server [2604:a880:1:20::17:5001]:123 (2.flatcar.pool.ntp.org). May 16 00:51:59.491290 systemd-timesyncd[2568]: Initial clock synchronization to Fri 2025-05-16 00:51:59.490954 UTC. May 16 00:52:00.713854 containerd[2665]: time="2025-05-16T00:52:00.713809774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:00.714125 containerd[2665]: time="2025-05-16T00:52:00.713826934Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326311" May 16 00:52:00.714903 containerd[2665]: time="2025-05-16T00:52:00.714879934Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:00.717795 containerd[2665]: time="2025-05-16T00:52:00.717770054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:00.718856 containerd[2665]: time="2025-05-16T00:52:00.718825614Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 2.16088116s" May 16 00:52:00.718888 containerd[2665]: time="2025-05-16T00:52:00.718864294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 16 00:52:00.719401 containerd[2665]: time="2025-05-16T00:52:00.719379134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 00:52:01.114302 systemd[1]: kubelet.service: Scheduled restart job, restart counter 
is at 1. May 16 00:52:01.125890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:52:01.234112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:52:01.237522 (kubelet)[3251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:52:01.272226 kubelet[3251]: E0516 00:52:01.272192 3251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:52:01.274974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:52:01.275125 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:52:02.339831 containerd[2665]: time="2025-05-16T00:52:02.339792134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:02.340040 containerd[2665]: time="2025-05-16T00:52:02.339825454Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530547" May 16 00:52:02.340912 containerd[2665]: time="2025-05-16T00:52:02.340883214Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:02.343661 containerd[2665]: time="2025-05-16T00:52:02.343637374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:02.344785 containerd[2665]: time="2025-05-16T00:52:02.344764614Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.62533936s" May 16 00:52:02.344804 containerd[2665]: time="2025-05-16T00:52:02.344792654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 16 00:52:02.345295 containerd[2665]: time="2025-05-16T00:52:02.345272774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 00:52:03.646518 containerd[2665]: time="2025-05-16T00:52:03.646482094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:03.646807 containerd[2665]: time="2025-05-16T00:52:03.646501174Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484190" May 16 00:52:03.647483 containerd[2665]: time="2025-05-16T00:52:03.647460934Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:03.650367 containerd[2665]: 
time="2025-05-16T00:52:03.650340374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:03.651462 containerd[2665]: time="2025-05-16T00:52:03.651438334Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.30612984s" May 16 00:52:03.651480 containerd[2665]: time="2025-05-16T00:52:03.651468654Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 16 00:52:03.651871 containerd[2665]: time="2025-05-16T00:52:03.651851814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 00:52:04.701670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788153503.mount: Deactivated successfully. May 16 00:52:04.884353 containerd[2665]: time="2025-05-16T00:52:04.884279254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:04.884658 containerd[2665]: time="2025-05-16T00:52:04.884296894Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377375" May 16 00:52:04.885104 containerd[2665]: time="2025-05-16T00:52:04.885059214Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:04.887168 containerd[2665]: time="2025-05-16T00:52:04.887145774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:04.887709 containerd[2665]: time="2025-05-16T00:52:04.887684334Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.23580076s" May 16 00:52:04.887738 containerd[2665]: time="2025-05-16T00:52:04.887715374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 16 00:52:04.888092 containerd[2665]: time="2025-05-16T00:52:04.888074014Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:52:05.280530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190191240.mount: Deactivated successfully. 
May 16 00:52:05.871318 containerd[2665]: time="2025-05-16T00:52:05.871212214Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 16 00:52:05.871318 containerd[2665]: time="2025-05-16T00:52:05.871217934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:05.872436 containerd[2665]: time="2025-05-16T00:52:05.872387894Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:05.875669 containerd[2665]: time="2025-05-16T00:52:05.875645014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:05.876724 containerd[2665]: time="2025-05-16T00:52:05.876706374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 988.60476ms" May 16 00:52:05.876768 containerd[2665]: time="2025-05-16T00:52:05.876730014Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 16 00:52:05.877596 containerd[2665]: time="2025-05-16T00:52:05.877468614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:52:06.135932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303062561.mount: Deactivated successfully. 
May 16 00:52:06.136358 containerd[2665]: time="2025-05-16T00:52:06.136328054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:06.136505 containerd[2665]: time="2025-05-16T00:52:06.136385694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 16 00:52:06.137091 containerd[2665]: time="2025-05-16T00:52:06.137070734Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:06.139215 containerd[2665]: time="2025-05-16T00:52:06.139193934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:06.140016 containerd[2665]: time="2025-05-16T00:52:06.139990374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 262.48308ms" May 16 00:52:06.140042 containerd[2665]: time="2025-05-16T00:52:06.140020814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 00:52:06.140568 containerd[2665]: time="2025-05-16T00:52:06.140367854Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 00:52:06.474354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2472465187.mount: Deactivated successfully. May 16 00:52:09.614732 containerd[2665]: time="2025-05-16T00:52:09.614690294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:09.615110 containerd[2665]: time="2025-05-16T00:52:09.614718054Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 16 00:52:09.615896 containerd[2665]: time="2025-05-16T00:52:09.615872414Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:09.619016 containerd[2665]: time="2025-05-16T00:52:09.618992534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:09.620343 containerd[2665]: time="2025-05-16T00:52:09.620297934Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.47988912s" May 16 00:52:09.620401 containerd[2665]: time="2025-05-16T00:52:09.620353494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 16 00:52:11.364257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 16 00:52:11.373877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:52:11.478959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:52:11.482626 (kubelet)[3474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:52:11.513138 kubelet[3474]: E0516 00:52:11.513103 3474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:52:11.515353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:52:11.515494 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:52:15.761159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:52:15.772043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:52:15.793877 systemd[1]: Reloading requested from client PID 3507 ('systemctl') (unit session-9.scope)... May 16 00:52:15.793888 systemd[1]: Reloading... May 16 00:52:15.857746 zram_generator::config[3551]: No configuration found. May 16 00:52:15.946324 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:52:16.017559 systemd[1]: Reloading finished in 223 ms. May 16 00:52:16.066368 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:52:16.068612 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:52:16.068813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:52:16.070305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:52:16.175162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:52:16.178860 (kubelet)[3614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:52:16.208780 kubelet[3614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:52:16.208780 kubelet[3614]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:52:16.208780 kubelet[3614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
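The failure at the top of this stretch is the last one: after the systemctl reload issued from session-9 above (the sudo install.sh session, which presumably put the configuration in place), the next start, kubelet[3614], comes up on v1.32.4 and warns only that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags that should move into the config file. The first and last of those have direct KubeletConfiguration equivalents; --pod-infra-container-image does not (per the warning, image garbage collection takes the sandbox image from CRI from 1.35 on). A minimal sketch of the migrated settings — not the node's actual config file — with the containerd socket path assumed and the other values taken from lines elsewhere in this log:

    # Sketch of the flag-to-config migration (not the real /var/lib/kubelet/config.yaml)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                                                    # matches "CgroupDriver":"systemd" in the NodeConfig dump below
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock         # assumed; the actual flag value is not printed
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/    # path from the Flexvolume probe line below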
May 16 00:52:16.209062 kubelet[3614]: I0516 00:52:16.208838 3614 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:52:16.689196 kubelet[3614]: I0516 00:52:16.689057 3614 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:52:16.689196 kubelet[3614]: I0516 00:52:16.689192 3614 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:52:16.690535 kubelet[3614]: I0516 00:52:16.690511 3614 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:52:16.710026 kubelet[3614]: E0516 00:52:16.710001 3614 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.151.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 16 00:52:16.722396 kubelet[3614]: I0516 00:52:16.722379 3614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:52:16.725371 kubelet[3614]: E0516 00:52:16.725349 3614 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:52:16.725397 kubelet[3614]: I0516 00:52:16.725370 3614 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:52:16.744713 kubelet[3614]: I0516 00:52:16.744687 3614 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:52:16.744919 kubelet[3614]: I0516 00:52:16.744893 3614 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:52:16.745067 kubelet[3614]: I0516 00:52:16.744918 3614 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.3-n-16e7659192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:52:16.745151 kubelet[3614]: I0516 00:52:16.745140 3614 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:52:16.745151 kubelet[3614]: I0516 00:52:16.745150 3614 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:52:16.745357 kubelet[3614]: I0516 00:52:16.745346 3614 state_mem.go:36] "Initialized new in-memory state store" May 16 00:52:16.748637 kubelet[3614]: I0516 00:52:16.748620 3614 kubelet.go:446] "Attempting to sync node with API server" May 16 00:52:16.748664 kubelet[3614]: I0516 00:52:16.748650 3614 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:52:16.748681 kubelet[3614]: I0516 00:52:16.748667 3614 kubelet.go:352] "Adding apiserver pod source" May 16 00:52:16.748681 kubelet[3614]: I0516 00:52:16.748679 3614 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:52:16.752887 kubelet[3614]: I0516 00:52:16.752857 3614 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:52:16.753316 kubelet[3614]: W0516 00:52:16.753267 3614 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-16e7659192&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 16 00:52:16.753359 kubelet[3614]: E0516 00:52:16.753327 3614 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-n-16e7659192&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 16 00:52:16.753359 kubelet[3614]: W0516 00:52:16.753276 3614 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 16 00:52:16.753423 kubelet[3614]: E0516 00:52:16.753356 3614 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 16 00:52:16.753423 kubelet[3614]: I0516 00:52:16.753404 3614 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:52:16.753538 kubelet[3614]: W0516 00:52:16.753517 3614 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:52:16.754293 kubelet[3614]: I0516 00:52:16.754282 3614 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:52:16.754331 kubelet[3614]: I0516 00:52:16.754310 3614 server.go:1287] "Started kubelet" May 16 00:52:16.754883 kubelet[3614]: I0516 00:52:16.754386 3614 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:52:16.756411 kubelet[3614]: I0516 00:52:16.756395 3614 server.go:479] "Adding debug handlers to kubelet server" May 16 00:52:16.756511 kubelet[3614]: I0516 00:52:16.756457 3614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:52:16.756573 kubelet[3614]: I0516 00:52:16.756552 3614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:52:16.758072 kubelet[3614]: I0516 00:52:16.756555 3614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:52:16.758709 kubelet[3614]: I0516 00:52:16.758513 3614 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:52:16.758799 kubelet[3614]: I0516 00:52:16.758683 3614 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:52:16.758850 kubelet[3614]: I0516 00:52:16.758781 3614 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:52:16.758893 kubelet[3614]: I0516 00:52:16.758881 3614 reconciler.go:26] "Reconciler: start to sync state" May 16 00:52:16.758930 kubelet[3614]: E0516 00:52:16.758896 3614 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-16e7659192\" not found" May 16 00:52:16.759162 kubelet[3614]: E0516 00:52:16.759105 3614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-16e7659192?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="200ms" May 16 00:52:16.760383 kubelet[3614]: W0516 00:52:16.760329 3614 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 16 00:52:16.760383 kubelet[3614]: E0516 00:52:16.760379 3614 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 16 00:52:16.760651 kubelet[3614]: E0516 00:52:16.760232 3614 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.151.230:6443/api/v1/namespaces/default/events\": dial tcp 147.28.151.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.3-n-16e7659192.183fdbb1b62018ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-n-16e7659192,UID:ci-4152.2.3-n-16e7659192,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-n-16e7659192,},FirstTimestamp:2025-05-16 00:52:16.754292974 +0000 UTC m=+0.572525201,LastTimestamp:2025-05-16 00:52:16.754292974 +0000 UTC m=+0.572525201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-n-16e7659192,}" May 16 00:52:16.760651 kubelet[3614]: I0516 00:52:16.760587 3614 factory.go:221] Registration of the systemd container factory successfully May 16 00:52:16.760734 kubelet[3614]: I0516 00:52:16.760687 3614 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:52:16.760767 kubelet[3614]: E0516 00:52:16.760733 3614 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:52:16.761361 kubelet[3614]: I0516 00:52:16.761342 3614 factory.go:221] Registration of the containerd container factory successfully May 16 00:52:16.771910 kubelet[3614]: I0516 00:52:16.771875 3614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:52:16.772886 kubelet[3614]: I0516 00:52:16.772875 3614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:52:16.772913 kubelet[3614]: I0516 00:52:16.772893 3614 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:52:16.772913 kubelet[3614]: I0516 00:52:16.772910 3614 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:52:16.772955 kubelet[3614]: I0516 00:52:16.772916 3614 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:52:16.772973 kubelet[3614]: E0516 00:52:16.772953 3614 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:52:16.773298 kubelet[3614]: W0516 00:52:16.773262 3614 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 16 00:52:16.773332 kubelet[3614]: E0516 00:52:16.773312 3614 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 16 00:52:16.775375 kubelet[3614]: I0516 00:52:16.775356 3614 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:52:16.775375 kubelet[3614]: I0516 00:52:16.775371 3614 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:52:16.775473 kubelet[3614]: I0516 00:52:16.775389 3614 state_mem.go:36] "Initialized new in-memory state store" May 16 00:52:16.776008 kubelet[3614]: I0516 00:52:16.775995 3614 policy_none.go:49] "None policy: Start" May 16 00:52:16.776028 kubelet[3614]: I0516 00:52:16.776012 3614 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:52:16.776028 kubelet[3614]: I0516 00:52:16.776022 3614 state_mem.go:35] "Initializing new in-memory state store" May 16 00:52:16.779360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:52:16.797808 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:52:16.801103 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 00:52:16.815553 kubelet[3614]: I0516 00:52:16.815531 3614 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:52:16.815759 kubelet[3614]: I0516 00:52:16.815735 3614 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:52:16.815791 kubelet[3614]: I0516 00:52:16.815760 3614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:52:16.815979 kubelet[3614]: I0516 00:52:16.815941 3614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:52:16.816375 kubelet[3614]: E0516 00:52:16.816354 3614 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:52:16.816415 kubelet[3614]: E0516 00:52:16.816395 3614 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.3-n-16e7659192\" not found" May 16 00:52:16.881603 systemd[1]: Created slice kubepods-burstable-pod30e0aa8b6ce1a90992bc245b5c2329cb.slice - libcontainer container kubepods-burstable-pod30e0aa8b6ce1a90992bc245b5c2329cb.slice. 
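At this point the kubelet is running but the API server at 147.28.151.230:6443 is not, which is why every list/watch, lease and event post above fails with "connection refused" (and the node registration attempts that follow fail the same way): this is a control-plane node bootstrapping itself, and the apiserver has to come up as a static pod first. The kubelet has already announced the static pod path /etc/kubernetes/manifests, and the kubepods-burstable-pod<uid>.slice units created here correspond to those manifests. The manifests themselves are not reproduced in the log; the abridged sketch below shows the general kubeadm shape of one of them, with the image tag taken from the pulls earlier, the volume names from the VerifyControllerAttachedVolume lines that follow, and everything else (paths, flags) assumed:

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- abridged, illustrative sketch only
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.5
        command:
        - kube-apiserver
        - --advertise-address=147.28.151.230        # assumed from the API endpoint seen in the log
        volumeMounts:
        - name: ca-certs
          mountPath: /etc/ssl/certs
          readOnly: true
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate

Once containerd has the pause:3.8 sandboxes and the three control-plane containers running (the RunPodSandbox/CreateContainer/StartContainer lines that follow), the apiserver becomes reachable and the registration and lease retries can succeed.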
May 16 00:52:16.905016 kubelet[3614]: E0516 00:52:16.904994 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:16.907150 systemd[1]: Created slice kubepods-burstable-pod458ccfb1b7d7b5921f653c9982cc656b.slice - libcontainer container kubepods-burstable-pod458ccfb1b7d7b5921f653c9982cc656b.slice. May 16 00:52:16.918192 kubelet[3614]: I0516 00:52:16.918176 3614 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152.2.3-n-16e7659192" May 16 00:52:16.918556 kubelet[3614]: E0516 00:52:16.918532 3614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4152.2.3-n-16e7659192" May 16 00:52:16.922758 kubelet[3614]: E0516 00:52:16.922719 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:16.924712 systemd[1]: Created slice kubepods-burstable-pod84a697fcfc8786542357ff7f1495dff4.slice - libcontainer container kubepods-burstable-pod84a697fcfc8786542357ff7f1495dff4.slice. May 16 00:52:16.925880 kubelet[3614]: E0516 00:52:16.925861 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:16.959213 kubelet[3614]: I0516 00:52:16.959147 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959213 kubelet[3614]: I0516 00:52:16.959174 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84a697fcfc8786542357ff7f1495dff4-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-n-16e7659192\" (UID: \"84a697fcfc8786542357ff7f1495dff4\") " pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959213 kubelet[3614]: I0516 00:52:16.959191 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30e0aa8b6ce1a90992bc245b5c2329cb-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-n-16e7659192\" (UID: \"30e0aa8b6ce1a90992bc245b5c2329cb\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959213 kubelet[3614]: I0516 00:52:16.959207 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959370 kubelet[3614]: I0516 00:52:16.959225 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-kubeconfig\") pod 
\"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959370 kubelet[3614]: I0516 00:52:16.959258 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959370 kubelet[3614]: I0516 00:52:16.959293 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30e0aa8b6ce1a90992bc245b5c2329cb-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-n-16e7659192\" (UID: \"30e0aa8b6ce1a90992bc245b5c2329cb\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959370 kubelet[3614]: I0516 00:52:16.959313 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30e0aa8b6ce1a90992bc245b5c2329cb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-n-16e7659192\" (UID: \"30e0aa8b6ce1a90992bc245b5c2329cb\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959370 kubelet[3614]: I0516 00:52:16.959332 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:16.959520 kubelet[3614]: E0516 00:52:16.959493 3614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-16e7659192?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="400ms" May 16 00:52:17.121548 kubelet[3614]: I0516 00:52:17.121522 3614 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152.2.3-n-16e7659192" May 16 00:52:17.121885 kubelet[3614]: E0516 00:52:17.121853 3614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4152.2.3-n-16e7659192" May 16 00:52:17.206175 containerd[2665]: time="2025-05-16T00:52:17.206087054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-n-16e7659192,Uid:30e0aa8b6ce1a90992bc245b5c2329cb,Namespace:kube-system,Attempt:0,}" May 16 00:52:17.223679 containerd[2665]: time="2025-05-16T00:52:17.223592014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-n-16e7659192,Uid:458ccfb1b7d7b5921f653c9982cc656b,Namespace:kube-system,Attempt:0,}" May 16 00:52:17.227143 containerd[2665]: time="2025-05-16T00:52:17.227110694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-n-16e7659192,Uid:84a697fcfc8786542357ff7f1495dff4,Namespace:kube-system,Attempt:0,}" May 16 00:52:17.360256 kubelet[3614]: E0516 00:52:17.360216 3614 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-n-16e7659192?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="800ms" May 16 00:52:17.500296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498660565.mount: Deactivated successfully. May 16 00:52:17.500921 containerd[2665]: time="2025-05-16T00:52:17.500887934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:52:17.501303 containerd[2665]: time="2025-05-16T00:52:17.501270134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 16 00:52:17.501713 containerd[2665]: time="2025-05-16T00:52:17.501687014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:52:17.502132 containerd[2665]: time="2025-05-16T00:52:17.502104574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:52:17.502445 containerd[2665]: time="2025-05-16T00:52:17.502423974Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:52:17.502563 containerd[2665]: time="2025-05-16T00:52:17.502530174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:52:17.504533 containerd[2665]: time="2025-05-16T00:52:17.504500734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:52:17.507373 containerd[2665]: time="2025-05-16T00:52:17.507346214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 301.17952ms" May 16 00:52:17.507846 containerd[2665]: time="2025-05-16T00:52:17.507822094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:52:17.508614 containerd[2665]: time="2025-05-16T00:52:17.508595454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 284.92556ms" May 16 00:52:17.510440 containerd[2665]: time="2025-05-16T00:52:17.510410934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
283.22252ms" May 16 00:52:17.524360 kubelet[3614]: I0516 00:52:17.524337 3614 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152.2.3-n-16e7659192" May 16 00:52:17.524606 kubelet[3614]: E0516 00:52:17.524586 3614 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4152.2.3-n-16e7659192" May 16 00:52:17.617497 containerd[2665]: time="2025-05-16T00:52:17.617413014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:17.617525 containerd[2665]: time="2025-05-16T00:52:17.617495854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:17.617525 containerd[2665]: time="2025-05-16T00:52:17.617510174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:17.617612 containerd[2665]: time="2025-05-16T00:52:17.617593054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:17.618921 containerd[2665]: time="2025-05-16T00:52:17.618598934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:17.618941 containerd[2665]: time="2025-05-16T00:52:17.618923934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:17.618963 containerd[2665]: time="2025-05-16T00:52:17.618936974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:17.618983 containerd[2665]: time="2025-05-16T00:52:17.618919534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:17.619001 containerd[2665]: time="2025-05-16T00:52:17.618982374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:17.619019 containerd[2665]: time="2025-05-16T00:52:17.618996614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:17.619036 containerd[2665]: time="2025-05-16T00:52:17.619016454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:17.619097 containerd[2665]: time="2025-05-16T00:52:17.619075774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:17.652861 systemd[1]: Started cri-containerd-3dcd427e78a0386b1d13c1bdae86069b9d0e6e45054ce7166ba0615ed54ae65b.scope - libcontainer container 3dcd427e78a0386b1d13c1bdae86069b9d0e6e45054ce7166ba0615ed54ae65b. May 16 00:52:17.654128 systemd[1]: Started cri-containerd-715102f94a3ecc3aba0b3ac6fca6fb4c837c752b446d539a77662abac6e41809.scope - libcontainer container 715102f94a3ecc3aba0b3ac6fca6fb4c837c752b446d539a77662abac6e41809. 
May 16 00:52:17.654816 kubelet[3614]: E0516 00:52:17.654728 3614 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.151.230:6443/api/v1/namespaces/default/events\": dial tcp 147.28.151.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.3-n-16e7659192.183fdbb1b62018ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-n-16e7659192,UID:ci-4152.2.3-n-16e7659192,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-n-16e7659192,},FirstTimestamp:2025-05-16 00:52:16.754292974 +0000 UTC m=+0.572525201,LastTimestamp:2025-05-16 00:52:16.754292974 +0000 UTC m=+0.572525201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-n-16e7659192,}" May 16 00:52:17.655436 systemd[1]: Started cri-containerd-91e79e4ff30bc10c72f7093f06c0e5e1dd38ee08ac37713c164c1e5c98f12122.scope - libcontainer container 91e79e4ff30bc10c72f7093f06c0e5e1dd38ee08ac37713c164c1e5c98f12122. May 16 00:52:17.658232 kubelet[3614]: W0516 00:52:17.658187 3614 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 16 00:52:17.658282 kubelet[3614]: E0516 00:52:17.658246 3614 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 16 00:52:17.676644 containerd[2665]: time="2025-05-16T00:52:17.676616574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-n-16e7659192,Uid:84a697fcfc8786542357ff7f1495dff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dcd427e78a0386b1d13c1bdae86069b9d0e6e45054ce7166ba0615ed54ae65b\"" May 16 00:52:17.677218 containerd[2665]: time="2025-05-16T00:52:17.677201174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-n-16e7659192,Uid:458ccfb1b7d7b5921f653c9982cc656b,Namespace:kube-system,Attempt:0,} returns sandbox id \"715102f94a3ecc3aba0b3ac6fca6fb4c837c752b446d539a77662abac6e41809\"" May 16 00:52:17.678145 containerd[2665]: time="2025-05-16T00:52:17.678112174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-n-16e7659192,Uid:30e0aa8b6ce1a90992bc245b5c2329cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"91e79e4ff30bc10c72f7093f06c0e5e1dd38ee08ac37713c164c1e5c98f12122\"" May 16 00:52:17.678929 containerd[2665]: time="2025-05-16T00:52:17.678908614Z" level=info msg="CreateContainer within sandbox \"715102f94a3ecc3aba0b3ac6fca6fb4c837c752b446d539a77662abac6e41809\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:52:17.678979 containerd[2665]: time="2025-05-16T00:52:17.678958054Z" level=info msg="CreateContainer within sandbox \"3dcd427e78a0386b1d13c1bdae86069b9d0e6e45054ce7166ba0615ed54ae65b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:52:17.679585 containerd[2665]: time="2025-05-16T00:52:17.679564614Z" level=info msg="CreateContainer 
within sandbox \"91e79e4ff30bc10c72f7093f06c0e5e1dd38ee08ac37713c164c1e5c98f12122\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:52:17.687206 containerd[2665]: time="2025-05-16T00:52:17.687172814Z" level=info msg="CreateContainer within sandbox \"3dcd427e78a0386b1d13c1bdae86069b9d0e6e45054ce7166ba0615ed54ae65b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"25507df0a35b54a27b039252e41e17bbe68f93f498dfa081deb2a6d09fa6b098\"" May 16 00:52:17.688531 containerd[2665]: time="2025-05-16T00:52:17.688497254Z" level=info msg="StartContainer for \"25507df0a35b54a27b039252e41e17bbe68f93f498dfa081deb2a6d09fa6b098\"" May 16 00:52:17.688692 containerd[2665]: time="2025-05-16T00:52:17.688665254Z" level=info msg="CreateContainer within sandbox \"91e79e4ff30bc10c72f7093f06c0e5e1dd38ee08ac37713c164c1e5c98f12122\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b653685c8ce696a52dcd06c0d3fb0779f5c7f2d0885fbcdad9fc20e86d369eaf\"" May 16 00:52:17.688756 containerd[2665]: time="2025-05-16T00:52:17.688727854Z" level=info msg="CreateContainer within sandbox \"715102f94a3ecc3aba0b3ac6fca6fb4c837c752b446d539a77662abac6e41809\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ccbf4b1060f24c7a6a45fb81f1eaaff69d87790cdcb8b57efaf4e155cecdf299\"" May 16 00:52:17.689301 containerd[2665]: time="2025-05-16T00:52:17.689283614Z" level=info msg="StartContainer for \"ccbf4b1060f24c7a6a45fb81f1eaaff69d87790cdcb8b57efaf4e155cecdf299\"" May 16 00:52:17.689369 containerd[2665]: time="2025-05-16T00:52:17.689280974Z" level=info msg="StartContainer for \"b653685c8ce696a52dcd06c0d3fb0779f5c7f2d0885fbcdad9fc20e86d369eaf\"" May 16 00:52:17.724921 systemd[1]: Started cri-containerd-25507df0a35b54a27b039252e41e17bbe68f93f498dfa081deb2a6d09fa6b098.scope - libcontainer container 25507df0a35b54a27b039252e41e17bbe68f93f498dfa081deb2a6d09fa6b098. May 16 00:52:17.726055 systemd[1]: Started cri-containerd-b653685c8ce696a52dcd06c0d3fb0779f5c7f2d0885fbcdad9fc20e86d369eaf.scope - libcontainer container b653685c8ce696a52dcd06c0d3fb0779f5c7f2d0885fbcdad9fc20e86d369eaf. May 16 00:52:17.727129 systemd[1]: Started cri-containerd-ccbf4b1060f24c7a6a45fb81f1eaaff69d87790cdcb8b57efaf4e155cecdf299.scope - libcontainer container ccbf4b1060f24c7a6a45fb81f1eaaff69d87790cdcb8b57efaf4e155cecdf299. 
May 16 00:52:17.737949 kubelet[3614]: W0516 00:52:17.737900 3614 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 16 00:52:17.737998 kubelet[3614]: E0516 00:52:17.737965 3614 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 16 00:52:17.749178 containerd[2665]: time="2025-05-16T00:52:17.749136574Z" level=info msg="StartContainer for \"25507df0a35b54a27b039252e41e17bbe68f93f498dfa081deb2a6d09fa6b098\" returns successfully" May 16 00:52:17.749985 containerd[2665]: time="2025-05-16T00:52:17.749951254Z" level=info msg="StartContainer for \"b653685c8ce696a52dcd06c0d3fb0779f5c7f2d0885fbcdad9fc20e86d369eaf\" returns successfully" May 16 00:52:17.752189 containerd[2665]: time="2025-05-16T00:52:17.752133894Z" level=info msg="StartContainer for \"ccbf4b1060f24c7a6a45fb81f1eaaff69d87790cdcb8b57efaf4e155cecdf299\" returns successfully" May 16 00:52:17.777934 kubelet[3614]: E0516 00:52:17.777906 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:17.778728 kubelet[3614]: E0516 00:52:17.778709 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:17.778848 kubelet[3614]: E0516 00:52:17.778827 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:18.327561 kubelet[3614]: I0516 00:52:18.326953 3614 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152.2.3-n-16e7659192" May 16 00:52:18.781109 kubelet[3614]: E0516 00:52:18.781080 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:18.781395 kubelet[3614]: E0516 00:52:18.781184 3614 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:19.062000 kubelet[3614]: E0516 00:52:19.061325 3614 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.3-n-16e7659192\" not found" node="ci-4152.2.3-n-16e7659192" May 16 00:52:19.165311 kubelet[3614]: I0516 00:52:19.165284 3614 kubelet_node_status.go:78] "Successfully registered node" node="ci-4152.2.3-n-16e7659192" May 16 00:52:19.259499 kubelet[3614]: I0516 00:52:19.259466 3614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:19.264319 kubelet[3614]: E0516 00:52:19.264137 3614 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:19.264319 kubelet[3614]: I0516 00:52:19.264160 3614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" May 16 00:52:19.265409 kubelet[3614]: E0516 00:52:19.265392 3614 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152.2.3-n-16e7659192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" May 16 00:52:19.265409 kubelet[3614]: I0516 00:52:19.265409 3614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:19.266786 kubelet[3614]: E0516 00:52:19.266767 3614 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152.2.3-n-16e7659192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:19.752537 kubelet[3614]: I0516 00:52:19.752502 3614 apiserver.go:52] "Watching apiserver" May 16 00:52:19.759612 kubelet[3614]: I0516 00:52:19.759583 3614 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:52:21.122981 systemd[1]: Reloading requested from client PID 4036 ('systemctl') (unit session-9.scope)... May 16 00:52:21.122991 systemd[1]: Reloading... May 16 00:52:21.189768 zram_generator::config[4078]: No configuration found. May 16 00:52:21.278938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:52:21.360417 systemd[1]: Reloading finished in 237 ms. May 16 00:52:21.392459 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:52:21.411623 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:52:21.411915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:52:21.411960 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 154.0M memory peak, 0B memory swap peak. May 16 00:52:21.428016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:52:21.529849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:52:21.533442 (kubelet)[4137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:52:21.563114 kubelet[4137]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:52:21.563114 kubelet[4137]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:52:21.563114 kubelet[4137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:52:21.563291 kubelet[4137]: I0516 00:52:21.563174 4137 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:52:21.568194 kubelet[4137]: I0516 00:52:21.568175 4137 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:52:21.568220 kubelet[4137]: I0516 00:52:21.568195 4137 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:52:21.568451 kubelet[4137]: I0516 00:52:21.568442 4137 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:52:21.569617 kubelet[4137]: I0516 00:52:21.569606 4137 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 00:52:21.571758 kubelet[4137]: I0516 00:52:21.571734 4137 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:52:21.575138 kubelet[4137]: E0516 00:52:21.575110 4137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:52:21.575162 kubelet[4137]: I0516 00:52:21.575142 4137 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:52:21.592718 kubelet[4137]: I0516 00:52:21.592700 4137 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:52:21.592928 kubelet[4137]: I0516 00:52:21.592902 4137 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:52:21.593086 kubelet[4137]: I0516 00:52:21.592927 4137 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.3-n-16e7659192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:52:21.593156 kubelet[4137]: I0516 00:52:21.593095 4137 
topology_manager.go:138] "Creating topology manager with none policy" May 16 00:52:21.593156 kubelet[4137]: I0516 00:52:21.593104 4137 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:52:21.593199 kubelet[4137]: I0516 00:52:21.593168 4137 state_mem.go:36] "Initialized new in-memory state store" May 16 00:52:21.593479 kubelet[4137]: I0516 00:52:21.593466 4137 kubelet.go:446] "Attempting to sync node with API server" May 16 00:52:21.593501 kubelet[4137]: I0516 00:52:21.593482 4137 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:52:21.593501 kubelet[4137]: I0516 00:52:21.593500 4137 kubelet.go:352] "Adding apiserver pod source" May 16 00:52:21.593536 kubelet[4137]: I0516 00:52:21.593510 4137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:52:21.595326 kubelet[4137]: I0516 00:52:21.595303 4137 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:52:21.595769 kubelet[4137]: I0516 00:52:21.595756 4137 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:52:21.596182 kubelet[4137]: I0516 00:52:21.596169 4137 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:52:21.596206 kubelet[4137]: I0516 00:52:21.596197 4137 server.go:1287] "Started kubelet" May 16 00:52:21.596344 kubelet[4137]: I0516 00:52:21.596314 4137 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:52:21.596382 kubelet[4137]: I0516 00:52:21.596340 4137 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:52:21.596572 kubelet[4137]: I0516 00:52:21.596558 4137 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:52:21.597296 kubelet[4137]: I0516 00:52:21.597279 4137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:52:21.597296 kubelet[4137]: I0516 00:52:21.597282 4137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:52:21.597349 kubelet[4137]: I0516 00:52:21.597335 4137 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:52:21.597371 kubelet[4137]: E0516 00:52:21.597349 4137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4152.2.3-n-16e7659192\" not found" May 16 00:52:21.597399 kubelet[4137]: I0516 00:52:21.597376 4137 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:52:21.597506 kubelet[4137]: I0516 00:52:21.597491 4137 reconciler.go:26] "Reconciler: start to sync state" May 16 00:52:21.597616 kubelet[4137]: E0516 00:52:21.597599 4137 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:52:21.597840 kubelet[4137]: I0516 00:52:21.597821 4137 factory.go:221] Registration of the systemd container factory successfully May 16 00:52:21.597934 kubelet[4137]: I0516 00:52:21.597917 4137 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:52:21.598229 kubelet[4137]: I0516 00:52:21.598215 4137 server.go:479] "Adding debug handlers to kubelet server" May 16 00:52:21.598460 kubelet[4137]: I0516 00:52:21.598447 4137 factory.go:221] Registration of the containerd container factory successfully May 16 00:52:21.604773 kubelet[4137]: I0516 00:52:21.604735 4137 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:52:21.605712 kubelet[4137]: I0516 00:52:21.605700 4137 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:52:21.605748 kubelet[4137]: I0516 00:52:21.605715 4137 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:52:21.605748 kubelet[4137]: I0516 00:52:21.605730 4137 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 00:52:21.605748 kubelet[4137]: I0516 00:52:21.605749 4137 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:52:21.605814 kubelet[4137]: E0516 00:52:21.605795 4137 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:52:21.638138 kubelet[4137]: I0516 00:52:21.638103 4137 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:52:21.638138 kubelet[4137]: I0516 00:52:21.638123 4137 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:52:21.638138 kubelet[4137]: I0516 00:52:21.638145 4137 state_mem.go:36] "Initialized new in-memory state store" May 16 00:52:21.638355 kubelet[4137]: I0516 00:52:21.638311 4137 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:52:21.638355 kubelet[4137]: I0516 00:52:21.638322 4137 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:52:21.638355 kubelet[4137]: I0516 00:52:21.638342 4137 policy_none.go:49] "None policy: Start" May 16 00:52:21.638355 kubelet[4137]: I0516 00:52:21.638355 4137 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:52:21.638478 kubelet[4137]: I0516 00:52:21.638364 4137 state_mem.go:35] "Initializing new in-memory state store" May 16 00:52:21.638478 kubelet[4137]: I0516 00:52:21.638457 4137 state_mem.go:75] "Updated machine memory state" May 16 00:52:21.641362 kubelet[4137]: I0516 00:52:21.641342 4137 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:52:21.641512 kubelet[4137]: I0516 00:52:21.641496 4137 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:52:21.641558 kubelet[4137]: I0516 00:52:21.641509 4137 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:52:21.641671 kubelet[4137]: I0516 00:52:21.641653 4137 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:52:21.642055 kubelet[4137]: E0516 00:52:21.642042 4137 eviction_manager.go:267] "eviction manager: failed to check if we have separate container 
filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:52:21.706557 kubelet[4137]: I0516 00:52:21.706510 4137 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:21.706557 kubelet[4137]: I0516 00:52:21.706552 4137 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" May 16 00:52:21.706635 kubelet[4137]: I0516 00:52:21.706618 4137 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:21.712322 kubelet[4137]: W0516 00:52:21.712296 4137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 16 00:52:21.712569 kubelet[4137]: W0516 00:52:21.712551 4137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 16 00:52:21.712636 kubelet[4137]: W0516 00:52:21.712620 4137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 16 00:52:21.744343 kubelet[4137]: I0516 00:52:21.744328 4137 kubelet_node_status.go:75] "Attempting to register node" node="ci-4152.2.3-n-16e7659192" May 16 00:52:21.748124 kubelet[4137]: I0516 00:52:21.748103 4137 kubelet_node_status.go:124] "Node was previously registered" node="ci-4152.2.3-n-16e7659192" May 16 00:52:21.748171 kubelet[4137]: I0516 00:52:21.748158 4137 kubelet_node_status.go:78] "Successfully registered node" node="ci-4152.2.3-n-16e7659192" May 16 00:52:21.899199 kubelet[4137]: I0516 00:52:21.899176 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899232 kubelet[4137]: I0516 00:52:21.899207 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84a697fcfc8786542357ff7f1495dff4-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-n-16e7659192\" (UID: \"84a697fcfc8786542357ff7f1495dff4\") " pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899253 kubelet[4137]: I0516 00:52:21.899238 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30e0aa8b6ce1a90992bc245b5c2329cb-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-n-16e7659192\" (UID: \"30e0aa8b6ce1a90992bc245b5c2329cb\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899284 kubelet[4137]: I0516 00:52:21.899267 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30e0aa8b6ce1a90992bc245b5c2329cb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-n-16e7659192\" (UID: \"30e0aa8b6ce1a90992bc245b5c2329cb\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899362 kubelet[4137]: I0516 00:52:21.899336 4137 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899414 kubelet[4137]: I0516 00:52:21.899401 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899437 kubelet[4137]: I0516 00:52:21.899419 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30e0aa8b6ce1a90992bc245b5c2329cb-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-n-16e7659192\" (UID: \"30e0aa8b6ce1a90992bc245b5c2329cb\") " pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899456 kubelet[4137]: I0516 00:52:21.899435 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:21.899478 kubelet[4137]: I0516 00:52:21.899452 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458ccfb1b7d7b5921f653c9982cc656b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-n-16e7659192\" (UID: \"458ccfb1b7d7b5921f653c9982cc656b\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" May 16 00:52:22.594630 kubelet[4137]: I0516 00:52:22.594595 4137 apiserver.go:52] "Watching apiserver" May 16 00:52:22.597986 kubelet[4137]: I0516 00:52:22.597967 4137 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:52:22.611195 kubelet[4137]: I0516 00:52:22.611176 4137 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:22.611271 kubelet[4137]: I0516 00:52:22.611256 4137 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" May 16 00:52:22.615102 kubelet[4137]: W0516 00:52:22.615085 4137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 16 00:52:22.615158 kubelet[4137]: E0516 00:52:22.615123 4137 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152.2.3-n-16e7659192\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" May 16 00:52:22.615220 kubelet[4137]: W0516 00:52:22.615204 4137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 16 00:52:22.615257 kubelet[4137]: E0516 00:52:22.615246 4137 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152.2.3-n-16e7659192\" 
already exists" pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" May 16 00:52:22.626698 kubelet[4137]: I0516 00:52:22.626661 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.3-n-16e7659192" podStartSLOduration=1.626650334 podStartE2EDuration="1.626650334s" podCreationTimestamp="2025-05-16 00:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:52:22.626492054 +0000 UTC m=+1.090239161" watchObservedRunningTime="2025-05-16 00:52:22.626650334 +0000 UTC m=+1.090397441" May 16 00:52:22.636980 kubelet[4137]: I0516 00:52:22.636940 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.3-n-16e7659192" podStartSLOduration=1.6369264540000001 podStartE2EDuration="1.636926454s" podCreationTimestamp="2025-05-16 00:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:52:22.631842374 +0000 UTC m=+1.095589481" watchObservedRunningTime="2025-05-16 00:52:22.636926454 +0000 UTC m=+1.100673561" May 16 00:52:22.637041 kubelet[4137]: I0516 00:52:22.637014 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.3-n-16e7659192" podStartSLOduration=1.637010254 podStartE2EDuration="1.637010254s" podCreationTimestamp="2025-05-16 00:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:52:22.636979734 +0000 UTC m=+1.100726841" watchObservedRunningTime="2025-05-16 00:52:22.637010254 +0000 UTC m=+1.100757361" May 16 00:52:27.988416 kubelet[4137]: I0516 00:52:27.988377 4137 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:52:27.988876 containerd[2665]: time="2025-05-16T00:52:27.988684574Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:52:27.989039 kubelet[4137]: I0516 00:52:27.988870 4137 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:52:28.834054 systemd[1]: Created slice kubepods-besteffort-pod3666d22e_e8db_4b5c_a075_b33f6efc2bfa.slice - libcontainer container kubepods-besteffort-pod3666d22e_e8db_4b5c_a075_b33f6efc2bfa.slice. 
May 16 00:52:28.840102 kubelet[4137]: I0516 00:52:28.840034 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3666d22e-e8db-4b5c-a075-b33f6efc2bfa-xtables-lock\") pod \"kube-proxy-rf2kx\" (UID: \"3666d22e-e8db-4b5c-a075-b33f6efc2bfa\") " pod="kube-system/kube-proxy-rf2kx" May 16 00:52:28.840102 kubelet[4137]: I0516 00:52:28.840077 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3666d22e-e8db-4b5c-a075-b33f6efc2bfa-lib-modules\") pod \"kube-proxy-rf2kx\" (UID: \"3666d22e-e8db-4b5c-a075-b33f6efc2bfa\") " pod="kube-system/kube-proxy-rf2kx" May 16 00:52:28.840102 kubelet[4137]: I0516 00:52:28.840095 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk52v\" (UniqueName: \"kubernetes.io/projected/3666d22e-e8db-4b5c-a075-b33f6efc2bfa-kube-api-access-pk52v\") pod \"kube-proxy-rf2kx\" (UID: \"3666d22e-e8db-4b5c-a075-b33f6efc2bfa\") " pod="kube-system/kube-proxy-rf2kx" May 16 00:52:28.840254 kubelet[4137]: I0516 00:52:28.840117 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3666d22e-e8db-4b5c-a075-b33f6efc2bfa-kube-proxy\") pod \"kube-proxy-rf2kx\" (UID: \"3666d22e-e8db-4b5c-a075-b33f6efc2bfa\") " pod="kube-system/kube-proxy-rf2kx" May 16 00:52:29.131878 systemd[1]: Created slice kubepods-besteffort-podc80d2412_4fee_4aa6_afef_310e325448cd.slice - libcontainer container kubepods-besteffort-podc80d2412_4fee_4aa6_afef_310e325448cd.slice. May 16 00:52:29.142543 kubelet[4137]: I0516 00:52:29.142506 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8dd\" (UniqueName: \"kubernetes.io/projected/c80d2412-4fee-4aa6-afef-310e325448cd-kube-api-access-8b8dd\") pod \"tigera-operator-844669ff44-5tdsk\" (UID: \"c80d2412-4fee-4aa6-afef-310e325448cd\") " pod="tigera-operator/tigera-operator-844669ff44-5tdsk" May 16 00:52:29.142543 kubelet[4137]: I0516 00:52:29.142544 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c80d2412-4fee-4aa6-afef-310e325448cd-var-lib-calico\") pod \"tigera-operator-844669ff44-5tdsk\" (UID: \"c80d2412-4fee-4aa6-afef-310e325448cd\") " pod="tigera-operator/tigera-operator-844669ff44-5tdsk" May 16 00:52:29.155117 containerd[2665]: time="2025-05-16T00:52:29.155092494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rf2kx,Uid:3666d22e-e8db-4b5c-a075-b33f6efc2bfa,Namespace:kube-system,Attempt:0,}" May 16 00:52:29.166944 containerd[2665]: time="2025-05-16T00:52:29.166878414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:29.166981 containerd[2665]: time="2025-05-16T00:52:29.166944214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:29.166981 containerd[2665]: time="2025-05-16T00:52:29.166956374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:29.167047 containerd[2665]: time="2025-05-16T00:52:29.167031014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:29.188875 systemd[1]: Started cri-containerd-a7d6a41a2305027b3cb45de3e74a7463766db793431692e79f112c4e178554cb.scope - libcontainer container a7d6a41a2305027b3cb45de3e74a7463766db793431692e79f112c4e178554cb. May 16 00:52:29.204641 containerd[2665]: time="2025-05-16T00:52:29.204608334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rf2kx,Uid:3666d22e-e8db-4b5c-a075-b33f6efc2bfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7d6a41a2305027b3cb45de3e74a7463766db793431692e79f112c4e178554cb\"" May 16 00:52:29.206742 containerd[2665]: time="2025-05-16T00:52:29.206707414Z" level=info msg="CreateContainer within sandbox \"a7d6a41a2305027b3cb45de3e74a7463766db793431692e79f112c4e178554cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:52:29.221943 containerd[2665]: time="2025-05-16T00:52:29.221916014Z" level=info msg="CreateContainer within sandbox \"a7d6a41a2305027b3cb45de3e74a7463766db793431692e79f112c4e178554cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"805884eb2f7cb25e7460989e7621b6c237645953eee6174b6bb7badc510117e2\"" May 16 00:52:29.222263 containerd[2665]: time="2025-05-16T00:52:29.222243134Z" level=info msg="StartContainer for \"805884eb2f7cb25e7460989e7621b6c237645953eee6174b6bb7badc510117e2\"" May 16 00:52:29.249852 systemd[1]: Started cri-containerd-805884eb2f7cb25e7460989e7621b6c237645953eee6174b6bb7badc510117e2.scope - libcontainer container 805884eb2f7cb25e7460989e7621b6c237645953eee6174b6bb7badc510117e2. May 16 00:52:29.270576 containerd[2665]: time="2025-05-16T00:52:29.270548614Z" level=info msg="StartContainer for \"805884eb2f7cb25e7460989e7621b6c237645953eee6174b6bb7badc510117e2\" returns successfully" May 16 00:52:29.434309 containerd[2665]: time="2025-05-16T00:52:29.434216534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-5tdsk,Uid:c80d2412-4fee-4aa6-afef-310e325448cd,Namespace:tigera-operator,Attempt:0,}" May 16 00:52:29.447309 containerd[2665]: time="2025-05-16T00:52:29.447254374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:29.447355 containerd[2665]: time="2025-05-16T00:52:29.447305654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:29.447355 containerd[2665]: time="2025-05-16T00:52:29.447318014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:29.447402 containerd[2665]: time="2025-05-16T00:52:29.447386734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:29.467868 systemd[1]: Started cri-containerd-2703a863dafdb337ec42e92544308e8619260163a7bab37e58400aa6226355c1.scope - libcontainer container 2703a863dafdb337ec42e92544308e8619260163a7bab37e58400aa6226355c1. 
May 16 00:52:29.490347 containerd[2665]: time="2025-05-16T00:52:29.490320854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-5tdsk,Uid:c80d2412-4fee-4aa6-afef-310e325448cd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2703a863dafdb337ec42e92544308e8619260163a7bab37e58400aa6226355c1\"" May 16 00:52:29.491509 containerd[2665]: time="2025-05-16T00:52:29.491491214Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 00:52:29.628451 kubelet[4137]: I0516 00:52:29.628404 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rf2kx" podStartSLOduration=1.6283886939999999 podStartE2EDuration="1.628388694s" podCreationTimestamp="2025-05-16 00:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:52:29.628267534 +0000 UTC m=+8.092014641" watchObservedRunningTime="2025-05-16 00:52:29.628388694 +0000 UTC m=+8.092135801" May 16 00:52:30.276555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount507253448.mount: Deactivated successfully. May 16 00:52:31.088193 containerd[2665]: time="2025-05-16T00:52:31.088106814Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 16 00:52:31.088193 containerd[2665]: time="2025-05-16T00:52:31.088108334Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:31.088980 containerd[2665]: time="2025-05-16T00:52:31.088953894Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:31.090924 containerd[2665]: time="2025-05-16T00:52:31.090900294Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:31.091653 containerd[2665]: time="2025-05-16T00:52:31.091630014Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 1.6001098s" May 16 00:52:31.091684 containerd[2665]: time="2025-05-16T00:52:31.091658974Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 16 00:52:31.093394 containerd[2665]: time="2025-05-16T00:52:31.093373494Z" level=info msg="CreateContainer within sandbox \"2703a863dafdb337ec42e92544308e8619260163a7bab37e58400aa6226355c1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 00:52:31.098001 containerd[2665]: time="2025-05-16T00:52:31.097978414Z" level=info msg="CreateContainer within sandbox \"2703a863dafdb337ec42e92544308e8619260163a7bab37e58400aa6226355c1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8032093c6950afac24618830d0df4e7cba3f1bb1e14015b3c4c13dc672be0846\"" May 16 00:52:31.098249 containerd[2665]: time="2025-05-16T00:52:31.098233774Z" level=info msg="StartContainer for \"8032093c6950afac24618830d0df4e7cba3f1bb1e14015b3c4c13dc672be0846\"" 
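The tigera-operator pull above reports an image size of 22,139,475 bytes fetched in 1.6001098 s. The snippet below is only back-of-the-envelope arithmetic on those two reported figures (roughly 13.8 MB/s), not a value containerd computes or logs:

size_bytes = 22_139_475      # image size from the "Pulled image" message above
elapsed_s = 1.6001098        # pull duration from the same message
print(f"{size_bytes / elapsed_s / 1e6:.1f} MB/s")   # ~13.8 MB/s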
May 16 00:52:31.126936 systemd[1]: Started cri-containerd-8032093c6950afac24618830d0df4e7cba3f1bb1e14015b3c4c13dc672be0846.scope - libcontainer container 8032093c6950afac24618830d0df4e7cba3f1bb1e14015b3c4c13dc672be0846. May 16 00:52:31.143662 containerd[2665]: time="2025-05-16T00:52:31.143632014Z" level=info msg="StartContainer for \"8032093c6950afac24618830d0df4e7cba3f1bb1e14015b3c4c13dc672be0846\" returns successfully" May 16 00:52:31.478339 update_engine[2659]: I20250516 00:52:31.478286 2659 update_attempter.cc:509] Updating boot flags... May 16 00:52:31.509755 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4610) May 16 00:52:31.537749 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4611) May 16 00:52:31.564751 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4611) May 16 00:52:31.632256 kubelet[4137]: I0516 00:52:31.632206 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-5tdsk" podStartSLOduration=1.030931174 podStartE2EDuration="2.632188934s" podCreationTimestamp="2025-05-16 00:52:29 +0000 UTC" firstStartedPulling="2025-05-16 00:52:29.491133854 +0000 UTC m=+7.954880961" lastFinishedPulling="2025-05-16 00:52:31.092391614 +0000 UTC m=+9.556138721" observedRunningTime="2025-05-16 00:52:31.632018654 +0000 UTC m=+10.095765761" watchObservedRunningTime="2025-05-16 00:52:31.632188934 +0000 UTC m=+10.095936041" May 16 00:52:36.004861 sudo[2915]: pam_unix(sudo:session): session closed for user root May 16 00:52:36.069592 sshd[2914]: Connection closed by 139.178.89.65 port 43790 May 16 00:52:36.070018 sshd-session[2912]: pam_unix(sshd:session): session closed for user core May 16 00:52:36.072921 systemd[1]: sshd@6-147.28.151.230:22-139.178.89.65:43790.service: Deactivated successfully. May 16 00:52:36.074613 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:52:36.074811 systemd[1]: session-9.scope: Consumed 8.160s CPU time, 182.3M memory peak, 0B memory swap peak. May 16 00:52:36.075537 systemd-logind[2644]: Session 9 logged out. Waiting for processes to exit. May 16 00:52:36.076349 systemd-logind[2644]: Removed session 9. May 16 00:52:40.844627 systemd[1]: Created slice kubepods-besteffort-pod41204d2f_40a3_40ed_8478_201f934821c0.slice - libcontainer container kubepods-besteffort-pod41204d2f_40a3_40ed_8478_201f934821c0.slice. 
May 16 00:52:40.914345 kubelet[4137]: I0516 00:52:40.914302 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzcxm\" (UniqueName: \"kubernetes.io/projected/41204d2f-40a3-40ed-8478-201f934821c0-kube-api-access-qzcxm\") pod \"calico-typha-5788465cf-pn8s2\" (UID: \"41204d2f-40a3-40ed-8478-201f934821c0\") " pod="calico-system/calico-typha-5788465cf-pn8s2" May 16 00:52:40.914345 kubelet[4137]: I0516 00:52:40.914350 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41204d2f-40a3-40ed-8478-201f934821c0-typha-certs\") pod \"calico-typha-5788465cf-pn8s2\" (UID: \"41204d2f-40a3-40ed-8478-201f934821c0\") " pod="calico-system/calico-typha-5788465cf-pn8s2" May 16 00:52:40.914732 kubelet[4137]: I0516 00:52:40.914370 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41204d2f-40a3-40ed-8478-201f934821c0-tigera-ca-bundle\") pod \"calico-typha-5788465cf-pn8s2\" (UID: \"41204d2f-40a3-40ed-8478-201f934821c0\") " pod="calico-system/calico-typha-5788465cf-pn8s2" May 16 00:52:41.148693 containerd[2665]: time="2025-05-16T00:52:41.148663409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5788465cf-pn8s2,Uid:41204d2f-40a3-40ed-8478-201f934821c0,Namespace:calico-system,Attempt:0,}" May 16 00:52:41.155812 systemd[1]: Created slice kubepods-besteffort-pod0d8b709f_eea3_4156_a618_18fa30b4a081.slice - libcontainer container kubepods-besteffort-pod0d8b709f_eea3_4156_a618_18fa30b4a081.slice. May 16 00:52:41.161275 containerd[2665]: time="2025-05-16T00:52:41.161197531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:41.161275 containerd[2665]: time="2025-05-16T00:52:41.161259875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:41.161275 containerd[2665]: time="2025-05-16T00:52:41.161271033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:41.161370 containerd[2665]: time="2025-05-16T00:52:41.161350693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:41.181870 systemd[1]: Started cri-containerd-8dcb011746258ec6e2e002dcc50e5d55dd2bc979812a21c3fdb07f2728f483b3.scope - libcontainer container 8dcb011746258ec6e2e002dcc50e5d55dd2bc979812a21c3fdb07f2728f483b3. 
May 16 00:52:41.204696 containerd[2665]: time="2025-05-16T00:52:41.204665060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5788465cf-pn8s2,Uid:41204d2f-40a3-40ed-8478-201f934821c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dcb011746258ec6e2e002dcc50e5d55dd2bc979812a21c3fdb07f2728f483b3\"" May 16 00:52:41.205798 containerd[2665]: time="2025-05-16T00:52:41.205775741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 16 00:52:41.215336 kubelet[4137]: I0516 00:52:41.215305 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-cni-net-dir\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215336 kubelet[4137]: I0516 00:52:41.215335 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-lib-modules\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215476 kubelet[4137]: I0516 00:52:41.215354 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-xtables-lock\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215476 kubelet[4137]: I0516 00:52:41.215371 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-flexvol-driver-host\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215476 kubelet[4137]: I0516 00:52:41.215427 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0d8b709f-eea3-4156-a618-18fa30b4a081-node-certs\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215563 kubelet[4137]: I0516 00:52:41.215476 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8b709f-eea3-4156-a618-18fa30b4a081-tigera-ca-bundle\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215563 kubelet[4137]: I0516 00:52:41.215525 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-cni-bin-dir\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215563 kubelet[4137]: I0516 00:52:41.215542 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-policysync\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 
16 00:52:41.215563 kubelet[4137]: I0516 00:52:41.215558 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-var-run-calico\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215645 kubelet[4137]: I0516 00:52:41.215576 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-var-lib-calico\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215645 kubelet[4137]: I0516 00:52:41.215593 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0d8b709f-eea3-4156-a618-18fa30b4a081-cni-log-dir\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.215645 kubelet[4137]: I0516 00:52:41.215608 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f78s\" (UniqueName: \"kubernetes.io/projected/0d8b709f-eea3-4156-a618-18fa30b4a081-kube-api-access-8f78s\") pod \"calico-node-pnck2\" (UID: \"0d8b709f-eea3-4156-a618-18fa30b4a081\") " pod="calico-system/calico-node-pnck2" May 16 00:52:41.317369 kubelet[4137]: E0516 00:52:41.317344 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.317369 kubelet[4137]: W0516 00:52:41.317365 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.317487 kubelet[4137]: E0516 00:52:41.317386 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.318670 kubelet[4137]: E0516 00:52:41.318654 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.318698 kubelet[4137]: W0516 00:52:41.318671 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.318698 kubelet[4137]: E0516 00:52:41.318686 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.324604 kubelet[4137]: E0516 00:52:41.324588 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.324604 kubelet[4137]: W0516 00:52:41.324602 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.324692 kubelet[4137]: E0516 00:52:41.324617 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.383533 kubelet[4137]: E0516 00:52:41.383496 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65t8z" podUID="f82c5669-93be-4155-9022-550402fea58a" May 16 00:52:41.401429 kubelet[4137]: E0516 00:52:41.401358 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.401429 kubelet[4137]: W0516 00:52:41.401381 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.401429 kubelet[4137]: E0516 00:52:41.401398 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.401616 kubelet[4137]: E0516 00:52:41.401602 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.401669 kubelet[4137]: W0516 00:52:41.401609 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.401669 kubelet[4137]: E0516 00:52:41.401646 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.401894 kubelet[4137]: E0516 00:52:41.401883 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.401894 kubelet[4137]: W0516 00:52:41.401892 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.401939 kubelet[4137]: E0516 00:52:41.401899 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.402079 kubelet[4137]: E0516 00:52:41.402068 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.402079 kubelet[4137]: W0516 00:52:41.402075 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.402125 kubelet[4137]: E0516 00:52:41.402084 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.402298 kubelet[4137]: E0516 00:52:41.402287 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.402298 kubelet[4137]: W0516 00:52:41.402295 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.402343 kubelet[4137]: E0516 00:52:41.402303 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.402448 kubelet[4137]: E0516 00:52:41.402438 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.402448 kubelet[4137]: W0516 00:52:41.402446 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.402493 kubelet[4137]: E0516 00:52:41.402453 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.402617 kubelet[4137]: E0516 00:52:41.402606 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.402617 kubelet[4137]: W0516 00:52:41.402613 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.402657 kubelet[4137]: E0516 00:52:41.402620 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.402828 kubelet[4137]: E0516 00:52:41.402817 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.402828 kubelet[4137]: W0516 00:52:41.402825 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.402876 kubelet[4137]: E0516 00:52:41.402832 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.403013 kubelet[4137]: E0516 00:52:41.403006 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.403035 kubelet[4137]: W0516 00:52:41.403013 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.403035 kubelet[4137]: E0516 00:52:41.403022 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.403168 kubelet[4137]: E0516 00:52:41.403160 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.403191 kubelet[4137]: W0516 00:52:41.403168 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.403191 kubelet[4137]: E0516 00:52:41.403175 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.403357 kubelet[4137]: E0516 00:52:41.403351 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.403378 kubelet[4137]: W0516 00:52:41.403357 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.403378 kubelet[4137]: E0516 00:52:41.403364 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.403562 kubelet[4137]: E0516 00:52:41.403555 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.403585 kubelet[4137]: W0516 00:52:41.403563 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.403585 kubelet[4137]: E0516 00:52:41.403570 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.403771 kubelet[4137]: E0516 00:52:41.403760 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.403771 kubelet[4137]: W0516 00:52:41.403768 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.403815 kubelet[4137]: E0516 00:52:41.403776 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.403953 kubelet[4137]: E0516 00:52:41.403944 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.403953 kubelet[4137]: W0516 00:52:41.403951 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.403999 kubelet[4137]: E0516 00:52:41.403957 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.404155 kubelet[4137]: E0516 00:52:41.404148 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.404177 kubelet[4137]: W0516 00:52:41.404155 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.404177 kubelet[4137]: E0516 00:52:41.404162 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.404354 kubelet[4137]: E0516 00:52:41.404347 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.404378 kubelet[4137]: W0516 00:52:41.404354 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.404378 kubelet[4137]: E0516 00:52:41.404361 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.404512 kubelet[4137]: E0516 00:52:41.404503 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.404512 kubelet[4137]: W0516 00:52:41.404511 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.404550 kubelet[4137]: E0516 00:52:41.404518 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.404685 kubelet[4137]: E0516 00:52:41.404678 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.404710 kubelet[4137]: W0516 00:52:41.404685 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.404710 kubelet[4137]: E0516 00:52:41.404692 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.404864 kubelet[4137]: E0516 00:52:41.404856 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.404883 kubelet[4137]: W0516 00:52:41.404863 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.404883 kubelet[4137]: E0516 00:52:41.404870 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.405017 kubelet[4137]: E0516 00:52:41.405010 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.405039 kubelet[4137]: W0516 00:52:41.405017 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.405039 kubelet[4137]: E0516 00:52:41.405024 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.417406 kubelet[4137]: E0516 00:52:41.417388 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.417406 kubelet[4137]: W0516 00:52:41.417402 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.417461 kubelet[4137]: E0516 00:52:41.417417 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.417461 kubelet[4137]: I0516 00:52:41.417442 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f82c5669-93be-4155-9022-550402fea58a-varrun\") pod \"csi-node-driver-65t8z\" (UID: \"f82c5669-93be-4155-9022-550402fea58a\") " pod="calico-system/csi-node-driver-65t8z" May 16 00:52:41.417716 kubelet[4137]: E0516 00:52:41.417702 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.417716 kubelet[4137]: W0516 00:52:41.417712 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.417770 kubelet[4137]: E0516 00:52:41.417724 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.417770 kubelet[4137]: I0516 00:52:41.417737 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f82c5669-93be-4155-9022-550402fea58a-registration-dir\") pod \"csi-node-driver-65t8z\" (UID: \"f82c5669-93be-4155-9022-550402fea58a\") " pod="calico-system/csi-node-driver-65t8z" May 16 00:52:41.417897 kubelet[4137]: E0516 00:52:41.417887 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.417897 kubelet[4137]: W0516 00:52:41.417896 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.417944 kubelet[4137]: E0516 00:52:41.417907 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.417944 kubelet[4137]: I0516 00:52:41.417922 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fls9q\" (UniqueName: \"kubernetes.io/projected/f82c5669-93be-4155-9022-550402fea58a-kube-api-access-fls9q\") pod \"csi-node-driver-65t8z\" (UID: \"f82c5669-93be-4155-9022-550402fea58a\") " pod="calico-system/csi-node-driver-65t8z" May 16 00:52:41.418162 kubelet[4137]: E0516 00:52:41.418153 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.418183 kubelet[4137]: W0516 00:52:41.418162 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.418183 kubelet[4137]: E0516 00:52:41.418172 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.418221 kubelet[4137]: I0516 00:52:41.418188 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f82c5669-93be-4155-9022-550402fea58a-socket-dir\") pod \"csi-node-driver-65t8z\" (UID: \"f82c5669-93be-4155-9022-550402fea58a\") " pod="calico-system/csi-node-driver-65t8z" May 16 00:52:41.418414 kubelet[4137]: E0516 00:52:41.418404 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.418472 kubelet[4137]: W0516 00:52:41.418414 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.418472 kubelet[4137]: E0516 00:52:41.418424 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.418472 kubelet[4137]: I0516 00:52:41.418437 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f82c5669-93be-4155-9022-550402fea58a-kubelet-dir\") pod \"csi-node-driver-65t8z\" (UID: \"f82c5669-93be-4155-9022-550402fea58a\") " pod="calico-system/csi-node-driver-65t8z" May 16 00:52:41.418663 kubelet[4137]: E0516 00:52:41.418653 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.418688 kubelet[4137]: W0516 00:52:41.418663 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.418688 kubelet[4137]: E0516 00:52:41.418678 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.418882 kubelet[4137]: E0516 00:52:41.418856 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.418882 kubelet[4137]: W0516 00:52:41.418863 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.418932 kubelet[4137]: E0516 00:52:41.418887 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.419091 kubelet[4137]: E0516 00:52:41.419002 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.419091 kubelet[4137]: W0516 00:52:41.419008 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.419091 kubelet[4137]: E0516 00:52:41.419024 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.419216 kubelet[4137]: E0516 00:52:41.419204 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.419216 kubelet[4137]: W0516 00:52:41.419213 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.419262 kubelet[4137]: E0516 00:52:41.419223 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.419443 kubelet[4137]: E0516 00:52:41.419431 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.419443 kubelet[4137]: W0516 00:52:41.419439 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.419443 kubelet[4137]: E0516 00:52:41.419449 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.419649 kubelet[4137]: E0516 00:52:41.419630 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.419649 kubelet[4137]: W0516 00:52:41.419637 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.419649 kubelet[4137]: E0516 00:52:41.419647 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.419899 kubelet[4137]: E0516 00:52:41.419891 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.419899 kubelet[4137]: W0516 00:52:41.419898 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.419981 kubelet[4137]: E0516 00:52:41.419906 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.420084 kubelet[4137]: E0516 00:52:41.420074 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.420084 kubelet[4137]: W0516 00:52:41.420082 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.420084 kubelet[4137]: E0516 00:52:41.420089 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.420291 kubelet[4137]: E0516 00:52:41.420225 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.420291 kubelet[4137]: W0516 00:52:41.420231 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.420291 kubelet[4137]: E0516 00:52:41.420238 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.420443 kubelet[4137]: E0516 00:52:41.420376 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.420443 kubelet[4137]: W0516 00:52:41.420383 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.420443 kubelet[4137]: E0516 00:52:41.420391 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.458701 containerd[2665]: time="2025-05-16T00:52:41.458652633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pnck2,Uid:0d8b709f-eea3-4156-a618-18fa30b4a081,Namespace:calico-system,Attempt:0,}" May 16 00:52:41.471038 containerd[2665]: time="2025-05-16T00:52:41.470971170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:41.471038 containerd[2665]: time="2025-05-16T00:52:41.471025316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:41.471088 containerd[2665]: time="2025-05-16T00:52:41.471038073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:41.471129 containerd[2665]: time="2025-05-16T00:52:41.471109575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:41.499933 systemd[1]: Started cri-containerd-89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3.scope - libcontainer container 89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3. May 16 00:52:41.515768 containerd[2665]: time="2025-05-16T00:52:41.515735333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pnck2,Uid:0d8b709f-eea3-4156-a618-18fa30b4a081,Namespace:calico-system,Attempt:0,} returns sandbox id \"89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3\"" May 16 00:52:41.519037 kubelet[4137]: E0516 00:52:41.519012 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.519063 kubelet[4137]: W0516 00:52:41.519036 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.519063 kubelet[4137]: E0516 00:52:41.519055 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.519259 kubelet[4137]: E0516 00:52:41.519248 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.519259 kubelet[4137]: W0516 00:52:41.519256 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.519306 kubelet[4137]: E0516 00:52:41.519268 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.519529 kubelet[4137]: E0516 00:52:41.519511 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.519553 kubelet[4137]: W0516 00:52:41.519528 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.519553 kubelet[4137]: E0516 00:52:41.519547 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.519743 kubelet[4137]: E0516 00:52:41.519732 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.519768 kubelet[4137]: W0516 00:52:41.519745 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.519768 kubelet[4137]: E0516 00:52:41.519756 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.519985 kubelet[4137]: E0516 00:52:41.519977 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.520007 kubelet[4137]: W0516 00:52:41.519984 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.520007 kubelet[4137]: E0516 00:52:41.519994 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.520261 kubelet[4137]: E0516 00:52:41.520249 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.520284 kubelet[4137]: W0516 00:52:41.520261 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.520284 kubelet[4137]: E0516 00:52:41.520273 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.520471 kubelet[4137]: E0516 00:52:41.520461 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.520492 kubelet[4137]: W0516 00:52:41.520471 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.520492 kubelet[4137]: E0516 00:52:41.520482 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.520671 kubelet[4137]: E0516 00:52:41.520663 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.520695 kubelet[4137]: W0516 00:52:41.520671 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.520695 kubelet[4137]: E0516 00:52:41.520681 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.520878 kubelet[4137]: E0516 00:52:41.520870 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.520897 kubelet[4137]: W0516 00:52:41.520877 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.520897 kubelet[4137]: E0516 00:52:41.520888 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.521121 kubelet[4137]: E0516 00:52:41.521114 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.521145 kubelet[4137]: W0516 00:52:41.521121 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.521145 kubelet[4137]: E0516 00:52:41.521132 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.521403 kubelet[4137]: E0516 00:52:41.521390 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.521425 kubelet[4137]: W0516 00:52:41.521406 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.521425 kubelet[4137]: E0516 00:52:41.521421 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.521639 kubelet[4137]: E0516 00:52:41.521630 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.521665 kubelet[4137]: W0516 00:52:41.521639 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.521665 kubelet[4137]: E0516 00:52:41.521652 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.521882 kubelet[4137]: E0516 00:52:41.521873 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.521905 kubelet[4137]: W0516 00:52:41.521881 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.521924 kubelet[4137]: E0516 00:52:41.521903 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.522036 kubelet[4137]: E0516 00:52:41.522029 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.522062 kubelet[4137]: W0516 00:52:41.522036 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.522062 kubelet[4137]: E0516 00:52:41.522053 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.522227 kubelet[4137]: E0516 00:52:41.522215 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.522227 kubelet[4137]: W0516 00:52:41.522223 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.522267 kubelet[4137]: E0516 00:52:41.522233 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.522442 kubelet[4137]: E0516 00:52:41.522431 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.522442 kubelet[4137]: W0516 00:52:41.522439 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.522487 kubelet[4137]: E0516 00:52:41.522452 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.522718 kubelet[4137]: E0516 00:52:41.522706 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.522743 kubelet[4137]: W0516 00:52:41.522717 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.522743 kubelet[4137]: E0516 00:52:41.522728 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.522904 kubelet[4137]: E0516 00:52:41.522891 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.522904 kubelet[4137]: W0516 00:52:41.522902 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.522946 kubelet[4137]: E0516 00:52:41.522914 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.523075 kubelet[4137]: E0516 00:52:41.523063 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.523097 kubelet[4137]: W0516 00:52:41.523073 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.523097 kubelet[4137]: E0516 00:52:41.523087 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.523238 kubelet[4137]: E0516 00:52:41.523229 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.523263 kubelet[4137]: W0516 00:52:41.523238 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.523263 kubelet[4137]: E0516 00:52:41.523248 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.523422 kubelet[4137]: E0516 00:52:41.523414 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.523442 kubelet[4137]: W0516 00:52:41.523422 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.523442 kubelet[4137]: E0516 00:52:41.523434 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.523611 kubelet[4137]: E0516 00:52:41.523602 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.523636 kubelet[4137]: W0516 00:52:41.523612 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.523636 kubelet[4137]: E0516 00:52:41.523623 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.523790 kubelet[4137]: E0516 00:52:41.523778 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.523790 kubelet[4137]: W0516 00:52:41.523787 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.523828 kubelet[4137]: E0516 00:52:41.523795 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.523957 kubelet[4137]: E0516 00:52:41.523946 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.523957 kubelet[4137]: W0516 00:52:41.523954 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.524001 kubelet[4137]: E0516 00:52:41.523962 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.524131 kubelet[4137]: E0516 00:52:41.524120 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.524131 kubelet[4137]: W0516 00:52:41.524129 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.524172 kubelet[4137]: E0516 00:52:41.524137 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:52:41.544344 kubelet[4137]: E0516 00:52:41.544322 4137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:52:41.544344 kubelet[4137]: W0516 00:52:41.544337 4137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:52:41.544433 kubelet[4137]: E0516 00:52:41.544351 4137 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:52:41.707265 containerd[2665]: time="2025-05-16T00:52:41.707183181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:41.707265 containerd[2665]: time="2025-05-16T00:52:41.707238527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 16 00:52:41.707856 containerd[2665]: time="2025-05-16T00:52:41.707834817Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:41.709676 containerd[2665]: time="2025-05-16T00:52:41.709650240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:41.710354 containerd[2665]: time="2025-05-16T00:52:41.710329549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 504.522576ms" May 16 00:52:41.710382 containerd[2665]: time="2025-05-16T00:52:41.710357262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 16 00:52:41.711137 containerd[2665]: time="2025-05-16T00:52:41.711114391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 00:52:41.715688 containerd[2665]: time="2025-05-16T00:52:41.715663165Z" level=info msg="CreateContainer within sandbox \"8dcb011746258ec6e2e002dcc50e5d55dd2bc979812a21c3fdb07f2728f483b3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 00:52:41.720648 containerd[2665]: time="2025-05-16T00:52:41.720616997Z" level=info msg="CreateContainer within sandbox \"8dcb011746258ec6e2e002dcc50e5d55dd2bc979812a21c3fdb07f2728f483b3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a2200ab2975760bb388ea5fec013f8ba92305a69034e90eac6ce3be7ba75fe92\"" May 16 00:52:41.720941 containerd[2665]: time="2025-05-16T00:52:41.720915362Z" level=info msg="StartContainer for \"a2200ab2975760bb388ea5fec013f8ba92305a69034e90eac6ce3be7ba75fe92\"" May 16 00:52:41.741911 systemd[1]: Started cri-containerd-a2200ab2975760bb388ea5fec013f8ba92305a69034e90eac6ce3be7ba75fe92.scope - libcontainer container a2200ab2975760bb388ea5fec013f8ba92305a69034e90eac6ce3be7ba75fe92. 
May 16 00:52:41.767789 containerd[2665]: time="2025-05-16T00:52:41.767759440Z" level=info msg="StartContainer for \"a2200ab2975760bb388ea5fec013f8ba92305a69034e90eac6ce3be7ba75fe92\" returns successfully" May 16 00:52:41.980564 containerd[2665]: time="2025-05-16T00:52:41.980448698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:41.980564 containerd[2665]: time="2025-05-16T00:52:41.980518960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 16 00:52:41.981112 containerd[2665]: time="2025-05-16T00:52:41.981091416Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:41.984534 containerd[2665]: time="2025-05-16T00:52:41.984499237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:41.985312 containerd[2665]: time="2025-05-16T00:52:41.985278761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 274.130818ms" May 16 00:52:41.985336 containerd[2665]: time="2025-05-16T00:52:41.985315991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 16 00:52:41.986898 containerd[2665]: time="2025-05-16T00:52:41.986871599Z" level=info msg="CreateContainer within sandbox \"89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 00:52:41.992338 containerd[2665]: time="2025-05-16T00:52:41.992303191Z" level=info msg="CreateContainer within sandbox \"89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc\"" May 16 00:52:41.992670 containerd[2665]: time="2025-05-16T00:52:41.992644225Z" level=info msg="StartContainer for \"aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc\"" May 16 00:52:42.016847 systemd[1]: Started cri-containerd-aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc.scope - libcontainer container aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc. May 16 00:52:42.036503 containerd[2665]: time="2025-05-16T00:52:42.036468987Z" level=info msg="StartContainer for \"aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc\" returns successfully" May 16 00:52:42.048324 systemd[1]: cri-containerd-aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc.scope: Deactivated successfully. May 16 00:52:42.063094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc-rootfs.mount: Deactivated successfully. 
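The flexvol-driver container above (pulled from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0, started, then its scope deactivated and its rootfs unmounted) runs once as an init container: its job is to place the uds driver binary into the FlexVolume plugin directory the kubelet was probing earlier. A rough sketch of that install step follows; the destination plugin directory name comes from the log, while the source path /usr/local/bin/flexvol and the hostPath mount at /host/driver are assumptions for illustration, not Calico's actual layout.

// install_flexvol.go: illustrative sketch of an init container installing a
// FlexVolume driver onto the host. Only the probed plugin directory name is
// taken from the log; src and dstDir are assumed paths.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	src := "/usr/local/bin/flexvol" // assumed: driver binary shipped in the image
	dstDir := "/host/driver"        // assumed: hostPath mount of .../volume/exec/nodeagent~uds

	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		log.Fatalf("create plugin dir: %v", err)
	}
	in, err := os.Open(src)
	if err != nil {
		log.Fatalf("open driver binary: %v", err)
	}
	defer in.Close()

	// Copy to a temporary name and rename, so the kubelet's prober never
	// executes a half-written binary.
	tmp := filepath.Join(dstDir, ".uds.tmp")
	out, err := os.OpenFile(tmp, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		log.Fatalf("create temp file: %v", err)
	}
	if _, err := io.Copy(out, in); err != nil {
		log.Fatalf("copy driver: %v", err)
	}
	out.Close()
	if err := os.Rename(tmp, filepath.Join(dstDir, "uds")); err != nil {
		log.Fatalf("install driver: %v", err)
	}
}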
May 16 00:52:42.198165 containerd[2665]: time="2025-05-16T00:52:42.198118809Z" level=info msg="shim disconnected" id=aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc namespace=k8s.io May 16 00:52:42.198396 containerd[2665]: time="2025-05-16T00:52:42.198164118Z" level=warning msg="cleaning up after shim disconnected" id=aa1ced6780f4f1df206c31f7b907872c9ff4b35d75a57be969f71336c49c48cc namespace=k8s.io May 16 00:52:42.198396 containerd[2665]: time="2025-05-16T00:52:42.198172716Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:52:42.606004 kubelet[4137]: E0516 00:52:42.605967 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65t8z" podUID="f82c5669-93be-4155-9022-550402fea58a" May 16 00:52:42.642912 containerd[2665]: time="2025-05-16T00:52:42.642884044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 00:52:42.660155 kubelet[4137]: I0516 00:52:42.660110 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5788465cf-pn8s2" podStartSLOduration=2.154669631 podStartE2EDuration="2.660096259s" podCreationTimestamp="2025-05-16 00:52:40 +0000 UTC" firstStartedPulling="2025-05-16 00:52:41.205559315 +0000 UTC m=+19.669306422" lastFinishedPulling="2025-05-16 00:52:41.710985943 +0000 UTC m=+20.174733050" observedRunningTime="2025-05-16 00:52:42.660087621 +0000 UTC m=+21.123834728" watchObservedRunningTime="2025-05-16 00:52:42.660096259 +0000 UTC m=+21.123843326" May 16 00:52:43.574307 containerd[2665]: time="2025-05-16T00:52:43.574267097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:43.574666 containerd[2665]: time="2025-05-16T00:52:43.574303409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 16 00:52:43.574985 containerd[2665]: time="2025-05-16T00:52:43.574965623Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:43.576935 containerd[2665]: time="2025-05-16T00:52:43.576910392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:43.577658 containerd[2665]: time="2025-05-16T00:52:43.577638711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 934.721955ms" May 16 00:52:43.577686 containerd[2665]: time="2025-05-16T00:52:43.577664785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 16 00:52:43.579359 containerd[2665]: time="2025-05-16T00:52:43.579331056Z" level=info msg="CreateContainer within sandbox \"89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 00:52:43.584923 containerd[2665]: time="2025-05-16T00:52:43.584892545Z" level=info msg="CreateContainer within sandbox \"89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd\"" May 16 00:52:43.585267 containerd[2665]: time="2025-05-16T00:52:43.585241707Z" level=info msg="StartContainer for \"fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd\"" May 16 00:52:43.625941 systemd[1]: Started cri-containerd-fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd.scope - libcontainer container fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd. May 16 00:52:43.646020 kubelet[4137]: I0516 00:52:43.645995 4137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:52:43.646854 containerd[2665]: time="2025-05-16T00:52:43.646815434Z" level=info msg="StartContainer for \"fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd\" returns successfully" May 16 00:52:44.010956 systemd[1]: cri-containerd-fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd.scope: Deactivated successfully. May 16 00:52:44.107370 kubelet[4137]: I0516 00:52:44.107342 4137 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:52:44.137730 systemd[1]: Created slice kubepods-burstable-pod977000b3_b558_41f5_b536_60dbfe10c1b4.slice - libcontainer container kubepods-burstable-pod977000b3_b558_41f5_b536_60dbfe10c1b4.slice. May 16 00:52:44.141414 containerd[2665]: time="2025-05-16T00:52:44.141360360Z" level=info msg="shim disconnected" id=fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd namespace=k8s.io May 16 00:52:44.141464 containerd[2665]: time="2025-05-16T00:52:44.141414629Z" level=warning msg="cleaning up after shim disconnected" id=fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd namespace=k8s.io May 16 00:52:44.141464 containerd[2665]: time="2025-05-16T00:52:44.141423027Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:52:44.142212 systemd[1]: Created slice kubepods-besteffort-pod21480104_b865_4459_9a00_43d79bca056f.slice - libcontainer container kubepods-besteffort-pod21480104_b865_4459_9a00_43d79bca056f.slice. May 16 00:52:44.145605 systemd[1]: Created slice kubepods-besteffort-poda4ae8cdc_592f_4b69_bcaa_748076a5f292.slice - libcontainer container kubepods-besteffort-poda4ae8cdc_592f_4b69_bcaa_748076a5f292.slice. May 16 00:52:44.150117 systemd[1]: Created slice kubepods-burstable-pod73af8152_9d20_4cac_a7ca_149e696fff87.slice - libcontainer container kubepods-burstable-pod73af8152_9d20_4cac_a7ca_149e696fff87.slice. May 16 00:52:44.153490 systemd[1]: Created slice kubepods-besteffort-podd8a55ae1_9984_429b_9b49_8a7e9ec6fb11.slice - libcontainer container kubepods-besteffort-podd8a55ae1_9984_429b_9b49_8a7e9ec6fb11.slice. May 16 00:52:44.157227 systemd[1]: Created slice kubepods-besteffort-podaed452f7_5298_4a6e_bf6b_b06604585632.slice - libcontainer container kubepods-besteffort-podaed452f7_5298_4a6e_bf6b_b06604585632.slice. May 16 00:52:44.162168 systemd[1]: Created slice kubepods-besteffort-pod6a45b85a_e5a7_4f77_b3f6_35027c560e3c.slice - libcontainer container kubepods-besteffort-pod6a45b85a_e5a7_4f77_b3f6_35027c560e3c.slice. 
May 16 00:52:44.241705 kubelet[4137]: I0516 00:52:44.241668 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-ca-bundle\") pod \"whisker-7dfc496f9-hrhpn\" (UID: \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\") " pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:44.241796 kubelet[4137]: I0516 00:52:44.241709 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8kb5\" (UniqueName: \"kubernetes.io/projected/21480104-b865-4459-9a00-43d79bca056f-kube-api-access-v8kb5\") pod \"calico-kube-controllers-7c6564c8c8-94d98\" (UID: \"21480104-b865-4459-9a00-43d79bca056f\") " pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:44.241796 kubelet[4137]: I0516 00:52:44.241730 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73af8152-9d20-4cac-a7ca-149e696fff87-config-volume\") pod \"coredns-668d6bf9bc-wbpwb\" (UID: \"73af8152-9d20-4cac-a7ca-149e696fff87\") " pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:44.241796 kubelet[4137]: I0516 00:52:44.241757 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4ae8cdc-592f-4b69-bcaa-748076a5f292-calico-apiserver-certs\") pod \"calico-apiserver-5cf5649dd4-7s65s\" (UID: \"a4ae8cdc-592f-4b69-bcaa-748076a5f292\") " pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:44.241796 kubelet[4137]: I0516 00:52:44.241773 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n89l\" (UniqueName: \"kubernetes.io/projected/977000b3-b558-41f5-b536-60dbfe10c1b4-kube-api-access-9n89l\") pod \"coredns-668d6bf9bc-4h2v7\" (UID: \"977000b3-b558-41f5-b536-60dbfe10c1b4\") " pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:44.241796 kubelet[4137]: I0516 00:52:44.241791 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aed452f7-5298-4a6e-bf6b-b06604585632-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-hrh6s\" (UID: \"aed452f7-5298-4a6e-bf6b-b06604585632\") " pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.241959 kubelet[4137]: I0516 00:52:44.241808 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21480104-b865-4459-9a00-43d79bca056f-tigera-ca-bundle\") pod \"calico-kube-controllers-7c6564c8c8-94d98\" (UID: \"21480104-b865-4459-9a00-43d79bca056f\") " pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:44.241959 kubelet[4137]: I0516 00:52:44.241824 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vq4q\" (UniqueName: \"kubernetes.io/projected/6a45b85a-e5a7-4f77-b3f6-35027c560e3c-kube-api-access-9vq4q\") pod \"calico-apiserver-5cf5649dd4-49jjr\" (UID: \"6a45b85a-e5a7-4f77-b3f6-35027c560e3c\") " pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:44.241959 kubelet[4137]: I0516 00:52:44.241920 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cz5tr\" (UniqueName: \"kubernetes.io/projected/a4ae8cdc-592f-4b69-bcaa-748076a5f292-kube-api-access-cz5tr\") pod \"calico-apiserver-5cf5649dd4-7s65s\" (UID: \"a4ae8cdc-592f-4b69-bcaa-748076a5f292\") " pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:44.241959 kubelet[4137]: I0516 00:52:44.241953 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/977000b3-b558-41f5-b536-60dbfe10c1b4-config-volume\") pod \"coredns-668d6bf9bc-4h2v7\" (UID: \"977000b3-b558-41f5-b536-60dbfe10c1b4\") " pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:44.242042 kubelet[4137]: I0516 00:52:44.241971 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns2w2\" (UniqueName: \"kubernetes.io/projected/aed452f7-5298-4a6e-bf6b-b06604585632-kube-api-access-ns2w2\") pod \"goldmane-78d55f7ddc-hrh6s\" (UID: \"aed452f7-5298-4a6e-bf6b-b06604585632\") " pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.242042 kubelet[4137]: I0516 00:52:44.241996 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdqr\" (UniqueName: \"kubernetes.io/projected/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-kube-api-access-5cdqr\") pod \"whisker-7dfc496f9-hrhpn\" (UID: \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\") " pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:44.242042 kubelet[4137]: I0516 00:52:44.242012 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed452f7-5298-4a6e-bf6b-b06604585632-config\") pod \"goldmane-78d55f7ddc-hrh6s\" (UID: \"aed452f7-5298-4a6e-bf6b-b06604585632\") " pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.242042 kubelet[4137]: I0516 00:52:44.242029 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a45b85a-e5a7-4f77-b3f6-35027c560e3c-calico-apiserver-certs\") pod \"calico-apiserver-5cf5649dd4-49jjr\" (UID: \"6a45b85a-e5a7-4f77-b3f6-35027c560e3c\") " pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:44.242124 kubelet[4137]: I0516 00:52:44.242049 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-backend-key-pair\") pod \"whisker-7dfc496f9-hrhpn\" (UID: \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\") " pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:44.242124 kubelet[4137]: I0516 00:52:44.242066 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6vc\" (UniqueName: \"kubernetes.io/projected/73af8152-9d20-4cac-a7ca-149e696fff87-kube-api-access-4f6vc\") pod \"coredns-668d6bf9bc-wbpwb\" (UID: \"73af8152-9d20-4cac-a7ca-149e696fff87\") " pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:44.242124 kubelet[4137]: I0516 00:52:44.242081 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aed452f7-5298-4a6e-bf6b-b06604585632-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-hrh6s\" (UID: \"aed452f7-5298-4a6e-bf6b-b06604585632\") " 
pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.440681 containerd[2665]: time="2025-05-16T00:52:44.440639078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:0,}" May 16 00:52:44.444561 containerd[2665]: time="2025-05-16T00:52:44.444534109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:0,}" May 16 00:52:44.448060 containerd[2665]: time="2025-05-16T00:52:44.448018266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:0,}" May 16 00:52:44.452532 containerd[2665]: time="2025-05-16T00:52:44.452508614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:0,}" May 16 00:52:44.456016 containerd[2665]: time="2025-05-16T00:52:44.455994170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:0,}" May 16 00:52:44.461607 containerd[2665]: time="2025-05-16T00:52:44.461578051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:0,}" May 16 00:52:44.464049 containerd[2665]: time="2025-05-16T00:52:44.464025463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:0,}" May 16 00:52:44.496796 containerd[2665]: time="2025-05-16T00:52:44.496734594Z" level=error msg="Failed to destroy network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.497354 containerd[2665]: time="2025-05-16T00:52:44.497326071Z" level=error msg="Failed to destroy network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.497923 containerd[2665]: time="2025-05-16T00:52:44.497895553Z" level=error msg="encountered an error cleaning up failed sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.497984 containerd[2665]: time="2025-05-16T00:52:44.497967018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 
00:52:44.498193 kubelet[4137]: E0516 00:52:44.498161 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.498245 kubelet[4137]: E0516 00:52:44.498227 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:44.498270 kubelet[4137]: E0516 00:52:44.498245 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:44.498298 containerd[2665]: time="2025-05-16T00:52:44.498226884Z" level=error msg="encountered an error cleaning up failed sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.498298 containerd[2665]: time="2025-05-16T00:52:44.498285272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.498337 kubelet[4137]: E0516 00:52:44.498287 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wbpwb" podUID="73af8152-9d20-4cac-a7ca-149e696fff87" May 16 00:52:44.498460 kubelet[4137]: E0516 00:52:44.498427 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.498491 kubelet[4137]: E0516 00:52:44.498479 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:44.498518 kubelet[4137]: E0516 00:52:44.498496 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:44.498541 containerd[2665]: time="2025-05-16T00:52:44.498476792Z" level=error msg="Failed to destroy network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.498577 kubelet[4137]: E0516 00:52:44.498532 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4h2v7" podUID="977000b3-b558-41f5-b536-60dbfe10c1b4" May 16 00:52:44.498695 containerd[2665]: time="2025-05-16T00:52:44.498662714Z" level=error msg="Failed to destroy network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.498958 containerd[2665]: time="2025-05-16T00:52:44.498933577Z" level=error msg="encountered an error cleaning up failed sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499006 containerd[2665]: time="2025-05-16T00:52:44.498975809Z" level=error msg="encountered an error cleaning up failed sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 
00:52:44.499049 containerd[2665]: time="2025-05-16T00:52:44.499033237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499093 containerd[2665]: time="2025-05-16T00:52:44.498981767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499159 kubelet[4137]: E0516 00:52:44.499134 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499198 kubelet[4137]: E0516 00:52:44.499173 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:44.499198 kubelet[4137]: E0516 00:52:44.499192 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:44.499255 containerd[2665]: time="2025-05-16T00:52:44.499164130Z" level=error msg="Failed to destroy network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499279 kubelet[4137]: E0516 00:52:44.499199 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499279 kubelet[4137]: E0516 00:52:44.499220 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" podUID="a4ae8cdc-592f-4b69-bcaa-748076a5f292" May 16 00:52:44.499279 kubelet[4137]: E0516 00:52:44.499244 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:44.499360 kubelet[4137]: E0516 00:52:44.499264 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:44.499360 kubelet[4137]: E0516 00:52:44.499293 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" podUID="21480104-b865-4459-9a00-43d79bca056f" May 16 00:52:44.499545 containerd[2665]: time="2025-05-16T00:52:44.499518496Z" level=error msg="encountered an error cleaning up failed sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499591 containerd[2665]: time="2025-05-16T00:52:44.499569125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 
00:52:44.499712 kubelet[4137]: E0516 00:52:44.499678 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.499762 kubelet[4137]: E0516 00:52:44.499725 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:44.499762 kubelet[4137]: E0516 00:52:44.499750 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:44.499819 kubelet[4137]: E0516 00:52:44.499784 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dfc496f9-hrhpn" podUID="d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" May 16 00:52:44.507389 containerd[2665]: time="2025-05-16T00:52:44.507359588Z" level=error msg="Failed to destroy network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.507656 containerd[2665]: time="2025-05-16T00:52:44.507635291Z" level=error msg="encountered an error cleaning up failed sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.507702 containerd[2665]: time="2025-05-16T00:52:44.507687200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" May 16 00:52:44.507793 containerd[2665]: time="2025-05-16T00:52:44.507771903Z" level=error msg="Failed to destroy network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.507858 kubelet[4137]: E0516 00:52:44.507827 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.507889 kubelet[4137]: E0516 00:52:44.507873 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.507927 kubelet[4137]: E0516 00:52:44.507890 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.507967 kubelet[4137]: E0516 00:52:44.507948 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:52:44.508027 containerd[2665]: time="2025-05-16T00:52:44.508007454Z" level=error msg="encountered an error cleaning up failed sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.508071 containerd[2665]: time="2025-05-16T00:52:44.508053684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.508172 kubelet[4137]: E0516 00:52:44.508147 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.508211 kubelet[4137]: E0516 00:52:44.508185 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:44.508211 kubelet[4137]: E0516 00:52:44.508199 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:44.508262 kubelet[4137]: E0516 00:52:44.508230 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" podUID="6a45b85a-e5a7-4f77-b3f6-35027c560e3c" May 16 00:52:44.595633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc045036af03e3b3a8d471fa99179d42bdc1e300d66d77aa6250c93eb782ddbd-rootfs.mount: Deactivated successfully. May 16 00:52:44.610927 systemd[1]: Created slice kubepods-besteffort-podf82c5669_93be_4155_9022_550402fea58a.slice - libcontainer container kubepods-besteffort-podf82c5669_93be_4155_9022_550402fea58a.slice. 
May 16 00:52:44.612576 containerd[2665]: time="2025-05-16T00:52:44.612544315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:0,}" May 16 00:52:44.647756 kubelet[4137]: I0516 00:52:44.647730 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386" May 16 00:52:44.648238 containerd[2665]: time="2025-05-16T00:52:44.648211511Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" May 16 00:52:44.648386 containerd[2665]: time="2025-05-16T00:52:44.648370678Z" level=info msg="Ensure that sandbox 7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386 in task-service has been cleanup successfully" May 16 00:52:44.648412 kubelet[4137]: I0516 00:52:44.648383 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0" May 16 00:52:44.648573 containerd[2665]: time="2025-05-16T00:52:44.648558119Z" level=info msg="TearDown network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" successfully" May 16 00:52:44.648612 containerd[2665]: time="2025-05-16T00:52:44.648573836Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" returns successfully" May 16 00:52:44.648766 containerd[2665]: time="2025-05-16T00:52:44.648753159Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" May 16 00:52:44.648896 containerd[2665]: time="2025-05-16T00:52:44.648880852Z" level=info msg="Ensure that sandbox 117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0 in task-service has been cleanup successfully" May 16 00:52:44.648998 containerd[2665]: time="2025-05-16T00:52:44.648975592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:1,}" May 16 00:52:44.649047 containerd[2665]: time="2025-05-16T00:52:44.649031381Z" level=info msg="TearDown network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" successfully" May 16 00:52:44.649047 containerd[2665]: time="2025-05-16T00:52:44.649044738Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" returns successfully" May 16 00:52:44.649119 kubelet[4137]: I0516 00:52:44.649106 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef" May 16 00:52:44.649418 containerd[2665]: time="2025-05-16T00:52:44.649400904Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" May 16 00:52:44.649546 containerd[2665]: time="2025-05-16T00:52:44.649533557Z" level=info msg="Ensure that sandbox 8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef in task-service has been cleanup successfully" May 16 00:52:44.649575 containerd[2665]: time="2025-05-16T00:52:44.649552633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:1,}" May 16 00:52:44.649688 containerd[2665]: time="2025-05-16T00:52:44.649674087Z" level=info msg="TearDown network for 
sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" successfully" May 16 00:52:44.649709 containerd[2665]: time="2025-05-16T00:52:44.649687485Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" returns successfully" May 16 00:52:44.650020 systemd[1]: run-netns-cni\x2d8ac008d6\x2dc8e4\x2d9fda\x2d76a8\x2d22bc697d6d5c.mount: Deactivated successfully. May 16 00:52:44.650229 containerd[2665]: time="2025-05-16T00:52:44.650052769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:1,}" May 16 00:52:44.650655 kubelet[4137]: I0516 00:52:44.650638 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f" May 16 00:52:44.651039 containerd[2665]: time="2025-05-16T00:52:44.651019768Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" May 16 00:52:44.651177 containerd[2665]: time="2025-05-16T00:52:44.651163218Z" level=info msg="Ensure that sandbox 8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f in task-service has been cleanup successfully" May 16 00:52:44.651543 containerd[2665]: time="2025-05-16T00:52:44.651520464Z" level=info msg="TearDown network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" successfully" May 16 00:52:44.651543 containerd[2665]: time="2025-05-16T00:52:44.651540100Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" returns successfully" May 16 00:52:44.651832 kubelet[4137]: I0516 00:52:44.651747 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d" May 16 00:52:44.651922 containerd[2665]: time="2025-05-16T00:52:44.651898186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:1,}" May 16 00:52:44.651988 systemd[1]: run-netns-cni\x2df432ed14\x2d2884\x2dec19\x2d79e5\x2dd21b7ee217d4.mount: Deactivated successfully. May 16 00:52:44.652059 systemd[1]: run-netns-cni\x2dce197afc\x2d6bde\x2d9353\x2de01e\x2d1ed415c3ca86.mount: Deactivated successfully. 
May 16 00:52:44.652466 containerd[2665]: time="2025-05-16T00:52:44.652123539Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" May 16 00:52:44.652466 containerd[2665]: time="2025-05-16T00:52:44.652264550Z" level=info msg="Ensure that sandbox 6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d in task-service has been cleanup successfully" May 16 00:52:44.652550 kubelet[4137]: I0516 00:52:44.652525 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca" May 16 00:52:44.652659 containerd[2665]: time="2025-05-16T00:52:44.652637432Z" level=info msg="TearDown network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" successfully" May 16 00:52:44.652711 containerd[2665]: time="2025-05-16T00:52:44.652699380Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" returns successfully" May 16 00:52:44.652909 containerd[2665]: time="2025-05-16T00:52:44.652887420Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" May 16 00:52:44.653257 containerd[2665]: time="2025-05-16T00:52:44.653037869Z" level=info msg="Ensure that sandbox 0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca in task-service has been cleanup successfully" May 16 00:52:44.653257 containerd[2665]: time="2025-05-16T00:52:44.653099376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:1,}" May 16 00:52:44.653343 containerd[2665]: time="2025-05-16T00:52:44.653277580Z" level=info msg="TearDown network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" successfully" May 16 00:52:44.653343 containerd[2665]: time="2025-05-16T00:52:44.653293736Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" returns successfully" May 16 00:52:44.653625 containerd[2665]: time="2025-05-16T00:52:44.653602752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:1,}" May 16 00:52:44.655044 kubelet[4137]: I0516 00:52:44.655029 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563" May 16 00:52:44.655329 containerd[2665]: time="2025-05-16T00:52:44.655307118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 00:52:44.655497 containerd[2665]: time="2025-05-16T00:52:44.655477643Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" May 16 00:52:44.655838 containerd[2665]: time="2025-05-16T00:52:44.655734390Z" level=info msg="Ensure that sandbox 1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563 in task-service has been cleanup successfully" May 16 00:52:44.656029 containerd[2665]: time="2025-05-16T00:52:44.656006413Z" level=info msg="TearDown network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" successfully" May 16 00:52:44.656054 containerd[2665]: time="2025-05-16T00:52:44.656028568Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" returns successfully" 
May 16 00:52:44.657693 containerd[2665]: time="2025-05-16T00:52:44.657432437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:1,}" May 16 00:52:44.658978 containerd[2665]: time="2025-05-16T00:52:44.658945883Z" level=error msg="Failed to destroy network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.659288 containerd[2665]: time="2025-05-16T00:52:44.659260018Z" level=error msg="encountered an error cleaning up failed sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.659353 containerd[2665]: time="2025-05-16T00:52:44.659314446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.659479 kubelet[4137]: E0516 00:52:44.659450 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.659578 kubelet[4137]: E0516 00:52:44.659493 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65t8z" May 16 00:52:44.659578 kubelet[4137]: E0516 00:52:44.659512 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65t8z" May 16 00:52:44.659578 kubelet[4137]: E0516 00:52:44.659548 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-65t8z_calico-system(f82c5669-93be-4155-9022-550402fea58a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-65t8z_calico-system(f82c5669-93be-4155-9022-550402fea58a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-65t8z" podUID="f82c5669-93be-4155-9022-550402fea58a" May 16 00:52:44.695783 containerd[2665]: time="2025-05-16T00:52:44.695666181Z" level=error msg="Failed to destroy network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.696539 containerd[2665]: time="2025-05-16T00:52:44.696514765Z" level=error msg="encountered an error cleaning up failed sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.696592 containerd[2665]: time="2025-05-16T00:52:44.696577072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.696823 kubelet[4137]: E0516 00:52:44.696794 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.696877 kubelet[4137]: E0516 00:52:44.696849 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:44.696877 kubelet[4137]: E0516 00:52:44.696873 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:44.696933 kubelet[4137]: E0516 00:52:44.696911 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dfc496f9-hrhpn" podUID="d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" May 16 00:52:44.698863 containerd[2665]: time="2025-05-16T00:52:44.698829804Z" level=error msg="Failed to destroy network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.699470 containerd[2665]: time="2025-05-16T00:52:44.699389848Z" level=error msg="Failed to destroy network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.699594 containerd[2665]: time="2025-05-16T00:52:44.699545975Z" level=error msg="encountered an error cleaning up failed sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.699629 containerd[2665]: time="2025-05-16T00:52:44.699600484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.699777 kubelet[4137]: E0516 00:52:44.699745 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.699832 kubelet[4137]: E0516 00:52:44.699794 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:44.699832 kubelet[4137]: E0516 00:52:44.699814 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:44.699880 kubelet[4137]: E0516 00:52:44.699850 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" podUID="6a45b85a-e5a7-4f77-b3f6-35027c560e3c" May 16 00:52:44.699927 containerd[2665]: time="2025-05-16T00:52:44.699812240Z" level=error msg="encountered an error cleaning up failed sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.699927 containerd[2665]: time="2025-05-16T00:52:44.699861470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.699985 kubelet[4137]: E0516 00:52:44.699959 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.700015 kubelet[4137]: E0516 00:52:44.700000 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:44.700040 kubelet[4137]: E0516 00:52:44.700015 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:44.700066 kubelet[4137]: E0516 00:52:44.700048 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wbpwb" podUID="73af8152-9d20-4cac-a7ca-149e696fff87" May 16 00:52:44.701644 containerd[2665]: time="2025-05-16T00:52:44.701617865Z" level=error msg="Failed to destroy network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.701922 containerd[2665]: time="2025-05-16T00:52:44.701903366Z" level=error msg="encountered an error cleaning up failed sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.701962 containerd[2665]: time="2025-05-16T00:52:44.701946317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.702090 kubelet[4137]: E0516 00:52:44.702065 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.702116 kubelet[4137]: E0516 00:52:44.702106 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:44.702141 containerd[2665]: time="2025-05-16T00:52:44.702089647Z" level=error msg="Failed to destroy network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.702165 kubelet[4137]: E0516 00:52:44.702122 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:44.702186 kubelet[4137]: E0516 00:52:44.702153 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" podUID="21480104-b865-4459-9a00-43d79bca056f" May 16 00:52:44.702389 containerd[2665]: time="2025-05-16T00:52:44.702371429Z" level=error msg="encountered an error cleaning up failed sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.702423 containerd[2665]: time="2025-05-16T00:52:44.702409621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.702540 kubelet[4137]: E0516 00:52:44.702510 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.702568 kubelet[4137]: E0516 00:52:44.702555 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.702589 kubelet[4137]: E0516 00:52:44.702573 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:44.702620 kubelet[4137]: E0516 00:52:44.702603 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:52:44.702730 containerd[2665]: time="2025-05-16T00:52:44.702713838Z" level=error msg="Failed to destroy network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.702988 containerd[2665]: time="2025-05-16T00:52:44.702970145Z" level=error msg="encountered an error cleaning up failed sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.703019 containerd[2665]: time="2025-05-16T00:52:44.703007097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.703109 kubelet[4137]: E0516 00:52:44.703090 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.703138 kubelet[4137]: E0516 00:52:44.703126 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:44.703164 kubelet[4137]: E0516 00:52:44.703142 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:44.703185 kubelet[4137]: E0516 00:52:44.703171 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4h2v7" podUID="977000b3-b558-41f5-b536-60dbfe10c1b4" May 16 00:52:44.704875 containerd[2665]: time="2025-05-16T00:52:44.704843476Z" level=error msg="Failed to destroy network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.705163 containerd[2665]: time="2025-05-16T00:52:44.705142814Z" level=error msg="encountered an error cleaning up failed sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.705205 containerd[2665]: time="2025-05-16T00:52:44.705188964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.705327 kubelet[4137]: E0516 00:52:44.705309 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:44.705349 kubelet[4137]: E0516 00:52:44.705340 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:44.705372 kubelet[4137]: E0516 00:52:44.705354 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:44.705398 kubelet[4137]: E0516 00:52:44.705382 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" podUID="a4ae8cdc-592f-4b69-bcaa-748076a5f292" May 16 00:52:45.587834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c-shm.mount: Deactivated successfully. May 16 00:52:45.587927 systemd[1]: run-netns-cni\x2d88ee959e\x2d0c19\x2d2e1c\x2d7cf3\x2da23d45c7728f.mount: Deactivated successfully. May 16 00:52:45.587971 systemd[1]: run-netns-cni\x2dd8a5b9e6\x2d92c5\x2dd84f\x2d4c45\x2dd18a701484c4.mount: Deactivated successfully. May 16 00:52:45.588013 systemd[1]: run-netns-cni\x2de84683c7\x2d443b\x2da6f7\x2d661b\x2d2751d9c3966e.mount: Deactivated successfully. May 16 00:52:45.588065 systemd[1]: run-netns-cni\x2dbb8e80a3\x2d71b0\x2d2bed\x2d0c8a\x2d324bdcf8c71c.mount: Deactivated successfully. May 16 00:52:45.657361 kubelet[4137]: I0516 00:52:45.657332 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab" May 16 00:52:45.657790 containerd[2665]: time="2025-05-16T00:52:45.657759227Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\"" May 16 00:52:45.657953 containerd[2665]: time="2025-05-16T00:52:45.657923155Z" level=info msg="Ensure that sandbox 3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab in task-service has been cleanup successfully" May 16 00:52:45.658112 containerd[2665]: time="2025-05-16T00:52:45.658098241Z" level=info msg="TearDown network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" successfully" May 16 00:52:45.658150 containerd[2665]: time="2025-05-16T00:52:45.658111919Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" returns successfully" May 16 00:52:45.658296 kubelet[4137]: I0516 00:52:45.658277 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629" May 16 00:52:45.658406 containerd[2665]: time="2025-05-16T00:52:45.658384106Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" May 16 00:52:45.658484 containerd[2665]: time="2025-05-16T00:52:45.658471609Z" level=info msg="TearDown network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" successfully" May 16 00:52:45.658507 containerd[2665]: time="2025-05-16T00:52:45.658485046Z" level=info msg="StopPodSandbox for 
\"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" returns successfully" May 16 00:52:45.658666 containerd[2665]: time="2025-05-16T00:52:45.658643055Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\"" May 16 00:52:45.658816 containerd[2665]: time="2025-05-16T00:52:45.658802304Z" level=info msg="Ensure that sandbox 38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629 in task-service has been cleanup successfully" May 16 00:52:45.658892 containerd[2665]: time="2025-05-16T00:52:45.658807263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:2,}" May 16 00:52:45.658980 containerd[2665]: time="2025-05-16T00:52:45.658965193Z" level=info msg="TearDown network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" successfully" May 16 00:52:45.659008 containerd[2665]: time="2025-05-16T00:52:45.658980710Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" returns successfully" May 16 00:52:45.659195 containerd[2665]: time="2025-05-16T00:52:45.659178391Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" May 16 00:52:45.659256 containerd[2665]: time="2025-05-16T00:52:45.659246218Z" level=info msg="TearDown network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" successfully" May 16 00:52:45.659281 containerd[2665]: time="2025-05-16T00:52:45.659257096Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" returns successfully" May 16 00:52:45.659357 kubelet[4137]: I0516 00:52:45.659340 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191" May 16 00:52:45.659564 containerd[2665]: time="2025-05-16T00:52:45.659542960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:2,}" May 16 00:52:45.659730 containerd[2665]: time="2025-05-16T00:52:45.659710688Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\"" May 16 00:52:45.659823 systemd[1]: run-netns-cni\x2dbdefe179\x2d161a\x2dfcda\x2d7c16\x2d6b3e57375492.mount: Deactivated successfully. 
May 16 00:52:45.659972 containerd[2665]: time="2025-05-16T00:52:45.659856899Z" level=info msg="Ensure that sandbox 31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191 in task-service has been cleanup successfully" May 16 00:52:45.660045 containerd[2665]: time="2025-05-16T00:52:45.660030385Z" level=info msg="TearDown network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" successfully" May 16 00:52:45.660068 containerd[2665]: time="2025-05-16T00:52:45.660045622Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" returns successfully" May 16 00:52:45.660254 containerd[2665]: time="2025-05-16T00:52:45.660239825Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" May 16 00:52:45.660275 kubelet[4137]: I0516 00:52:45.660258 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27" May 16 00:52:45.660320 containerd[2665]: time="2025-05-16T00:52:45.660309331Z" level=info msg="TearDown network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" successfully" May 16 00:52:45.660343 containerd[2665]: time="2025-05-16T00:52:45.660321049Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" returns successfully" May 16 00:52:45.660629 containerd[2665]: time="2025-05-16T00:52:45.660613592Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\"" May 16 00:52:45.660715 containerd[2665]: time="2025-05-16T00:52:45.660694136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:2,}" May 16 00:52:45.660763 containerd[2665]: time="2025-05-16T00:52:45.660749685Z" level=info msg="Ensure that sandbox 3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27 in task-service has been cleanup successfully" May 16 00:52:45.660926 containerd[2665]: time="2025-05-16T00:52:45.660913094Z" level=info msg="TearDown network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" successfully" May 16 00:52:45.660951 containerd[2665]: time="2025-05-16T00:52:45.660927211Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" returns successfully" May 16 00:52:45.661140 containerd[2665]: time="2025-05-16T00:52:45.661120933Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" May 16 00:52:45.661180 kubelet[4137]: I0516 00:52:45.661169 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1" May 16 00:52:45.661204 containerd[2665]: time="2025-05-16T00:52:45.661196118Z" level=info msg="TearDown network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" successfully" May 16 00:52:45.661223 containerd[2665]: time="2025-05-16T00:52:45.661205637Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" returns successfully" May 16 00:52:45.661556 containerd[2665]: time="2025-05-16T00:52:45.661539572Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\"" May 16 00:52:45.661576 
containerd[2665]: time="2025-05-16T00:52:45.661556408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:2,}" May 16 00:52:45.661675 containerd[2665]: time="2025-05-16T00:52:45.661662788Z" level=info msg="Ensure that sandbox 5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1 in task-service has been cleanup successfully" May 16 00:52:45.661823 containerd[2665]: time="2025-05-16T00:52:45.661808639Z" level=info msg="TearDown network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" successfully" May 16 00:52:45.661842 containerd[2665]: time="2025-05-16T00:52:45.661823036Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" returns successfully" May 16 00:52:45.661918 systemd[1]: run-netns-cni\x2d23a39267\x2ded6b\x2d746b\x2d43b6\x2df5fc1b7622a1.mount: Deactivated successfully. May 16 00:52:45.661991 systemd[1]: run-netns-cni\x2da8858836\x2db836\x2d8ccc\x2da262\x2d78d382758815.mount: Deactivated successfully. May 16 00:52:45.662061 containerd[2665]: time="2025-05-16T00:52:45.662039394Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" May 16 00:52:45.662129 containerd[2665]: time="2025-05-16T00:52:45.662117619Z" level=info msg="TearDown network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" successfully" May 16 00:52:45.662149 containerd[2665]: time="2025-05-16T00:52:45.662129617Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" returns successfully" May 16 00:52:45.662205 kubelet[4137]: I0516 00:52:45.662192 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e" May 16 00:52:45.662537 containerd[2665]: time="2025-05-16T00:52:45.662522500Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\"" May 16 00:52:45.662567 containerd[2665]: time="2025-05-16T00:52:45.662553334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:2,}" May 16 00:52:45.662658 containerd[2665]: time="2025-05-16T00:52:45.662646276Z" level=info msg="Ensure that sandbox e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e in task-service has been cleanup successfully" May 16 00:52:45.662849 containerd[2665]: time="2025-05-16T00:52:45.662832600Z" level=info msg="TearDown network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" successfully" May 16 00:52:45.662876 containerd[2665]: time="2025-05-16T00:52:45.662850317Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" returns successfully" May 16 00:52:45.662971 kubelet[4137]: I0516 00:52:45.662954 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c" May 16 00:52:45.663099 containerd[2665]: time="2025-05-16T00:52:45.663079392Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" May 16 00:52:45.663168 containerd[2665]: time="2025-05-16T00:52:45.663158057Z" level=info msg="TearDown network for sandbox 
\"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" successfully" May 16 00:52:45.663188 containerd[2665]: time="2025-05-16T00:52:45.663168615Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" returns successfully" May 16 00:52:45.663444 containerd[2665]: time="2025-05-16T00:52:45.663429484Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\"" May 16 00:52:45.663502 containerd[2665]: time="2025-05-16T00:52:45.663484233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:2,}" May 16 00:52:45.663569 containerd[2665]: time="2025-05-16T00:52:45.663555859Z" level=info msg="Ensure that sandbox 1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c in task-service has been cleanup successfully" May 16 00:52:45.663723 containerd[2665]: time="2025-05-16T00:52:45.663708110Z" level=info msg="TearDown network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" successfully" May 16 00:52:45.663746 containerd[2665]: time="2025-05-16T00:52:45.663724067Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" returns successfully" May 16 00:52:45.663936 kubelet[4137]: I0516 00:52:45.663924 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092" May 16 00:52:45.664051 containerd[2665]: time="2025-05-16T00:52:45.664033446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:1,}" May 16 00:52:45.664330 containerd[2665]: time="2025-05-16T00:52:45.664311432Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\"" May 16 00:52:45.664469 containerd[2665]: time="2025-05-16T00:52:45.664457444Z" level=info msg="Ensure that sandbox c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092 in task-service has been cleanup successfully" May 16 00:52:45.664608 containerd[2665]: time="2025-05-16T00:52:45.664596097Z" level=info msg="TearDown network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" successfully" May 16 00:52:45.664628 containerd[2665]: time="2025-05-16T00:52:45.664608654Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" returns successfully" May 16 00:52:45.664849 containerd[2665]: time="2025-05-16T00:52:45.664831251Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" May 16 00:52:45.664918 containerd[2665]: time="2025-05-16T00:52:45.664907796Z" level=info msg="TearDown network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" successfully" May 16 00:52:45.664946 containerd[2665]: time="2025-05-16T00:52:45.664918834Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" returns successfully" May 16 00:52:45.664929 systemd[1]: run-netns-cni\x2dfe07372e\x2d5c8f\x2dfeb9\x2d682c\x2ded233fc5a7c2.mount: Deactivated successfully. May 16 00:52:45.664996 systemd[1]: run-netns-cni\x2deb9fb334\x2d3c5a\x2d3471\x2d2c55\x2dc8469e045226.mount: Deactivated successfully. 
May 16 00:52:45.665040 systemd[1]: run-netns-cni\x2dbea4a528\x2daa83\x2d81ef\x2dedcf\x2d934ae03a8120.mount: Deactivated successfully. May 16 00:52:45.665265 containerd[2665]: time="2025-05-16T00:52:45.665245730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:2,}" May 16 00:52:45.667621 systemd[1]: run-netns-cni\x2d37c037d9\x2d045a\x2d169c\x2ddfb9\x2d580155e1f5b6.mount: Deactivated successfully. May 16 00:52:45.667692 systemd[1]: run-netns-cni\x2d45c9b912\x2d7297\x2d5f2a\x2d6649\x2d43c007cb3056.mount: Deactivated successfully. May 16 00:52:45.708031 containerd[2665]: time="2025-05-16T00:52:45.707764136Z" level=error msg="Failed to destroy network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.708440 containerd[2665]: time="2025-05-16T00:52:45.708415610Z" level=error msg="encountered an error cleaning up failed sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.708498 containerd[2665]: time="2025-05-16T00:52:45.708473238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.708738 kubelet[4137]: E0516 00:52:45.708696 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.708801 kubelet[4137]: E0516 00:52:45.708787 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:45.708856 kubelet[4137]: E0516 00:52:45.708809 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:45.708911 kubelet[4137]: E0516 
00:52:45.708891 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" podUID="a4ae8cdc-592f-4b69-bcaa-748076a5f292" May 16 00:52:45.709473 containerd[2665]: time="2025-05-16T00:52:45.709437211Z" level=error msg="Failed to destroy network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.709792 containerd[2665]: time="2025-05-16T00:52:45.709771066Z" level=error msg="encountered an error cleaning up failed sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.709833 containerd[2665]: time="2025-05-16T00:52:45.709817857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.709977 kubelet[4137]: E0516 00:52:45.709949 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.710061 kubelet[4137]: E0516 00:52:45.709995 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:45.710061 kubelet[4137]: E0516 00:52:45.710012 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:45.710061 kubelet[4137]: E0516 00:52:45.710045 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:52:45.711325 containerd[2665]: time="2025-05-16T00:52:45.711294849Z" level=error msg="Failed to destroy network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.711607 containerd[2665]: time="2025-05-16T00:52:45.711587112Z" level=error msg="encountered an error cleaning up failed sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.711647 containerd[2665]: time="2025-05-16T00:52:45.711632184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.711750 kubelet[4137]: E0516 00:52:45.711730 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.711773 kubelet[4137]: E0516 00:52:45.711759 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:45.711798 kubelet[4137]: E0516 00:52:45.711773 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:45.711822 kubelet[4137]: E0516 00:52:45.711797 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wbpwb" podUID="73af8152-9d20-4cac-a7ca-149e696fff87" May 16 00:52:45.712681 containerd[2665]: time="2025-05-16T00:52:45.712646666Z" level=error msg="Failed to destroy network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.712987 containerd[2665]: time="2025-05-16T00:52:45.712966684Z" level=error msg="encountered an error cleaning up failed sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.713028 containerd[2665]: time="2025-05-16T00:52:45.713012915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.713207 kubelet[4137]: E0516 00:52:45.713185 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.713238 kubelet[4137]: E0516 00:52:45.713222 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:45.713259 kubelet[4137]: E0516 00:52:45.713239 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:45.713290 kubelet[4137]: E0516 00:52:45.713269 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" podUID="6a45b85a-e5a7-4f77-b3f6-35027c560e3c" May 16 00:52:45.716005 containerd[2665]: time="2025-05-16T00:52:45.715947504Z" level=error msg="Failed to destroy network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716162 containerd[2665]: time="2025-05-16T00:52:45.716126709Z" level=error msg="Failed to destroy network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716351 containerd[2665]: time="2025-05-16T00:52:45.716293797Z" level=error msg="encountered an error cleaning up failed sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716411 containerd[2665]: time="2025-05-16T00:52:45.716393817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716476 containerd[2665]: time="2025-05-16T00:52:45.716453885Z" level=error msg="encountered an error cleaning up failed sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716524 containerd[2665]: time="2025-05-16T00:52:45.716507235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for 
sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716549 kubelet[4137]: E0516 00:52:45.716521 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716582 kubelet[4137]: E0516 00:52:45.716564 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65t8z" May 16 00:52:45.716604 kubelet[4137]: E0516 00:52:45.716585 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65t8z" May 16 00:52:45.716631 kubelet[4137]: E0516 00:52:45.716615 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-65t8z_calico-system(f82c5669-93be-4155-9022-550402fea58a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-65t8z_calico-system(f82c5669-93be-4155-9022-550402fea58a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-65t8z" podUID="f82c5669-93be-4155-9022-550402fea58a" May 16 00:52:45.716680 kubelet[4137]: E0516 00:52:45.716638 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.716680 kubelet[4137]: E0516 00:52:45.716670 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:45.716720 kubelet[4137]: E0516 00:52:45.716683 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:45.716744 kubelet[4137]: E0516 00:52:45.716715 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4h2v7" podUID="977000b3-b558-41f5-b536-60dbfe10c1b4" May 16 00:52:45.716849 containerd[2665]: time="2025-05-16T00:52:45.716829532Z" level=error msg="Failed to destroy network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.717115 containerd[2665]: time="2025-05-16T00:52:45.717096120Z" level=error msg="encountered an error cleaning up failed sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.717154 containerd[2665]: time="2025-05-16T00:52:45.717140192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.717274 kubelet[4137]: E0516 00:52:45.717251 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.717311 kubelet[4137]: E0516 00:52:45.717293 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:45.717336 kubelet[4137]: E0516 00:52:45.717319 4137 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:45.717387 kubelet[4137]: E0516 00:52:45.717363 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dfc496f9-hrhpn" podUID="d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" May 16 00:52:45.729387 containerd[2665]: time="2025-05-16T00:52:45.729347576Z" level=error msg="Failed to destroy network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.729678 containerd[2665]: time="2025-05-16T00:52:45.729656756Z" level=error msg="encountered an error cleaning up failed sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.729725 containerd[2665]: time="2025-05-16T00:52:45.729710666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.729902 kubelet[4137]: E0516 00:52:45.729876 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:45.729936 kubelet[4137]: E0516 00:52:45.729924 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 
00:52:45.729957 kubelet[4137]: E0516 00:52:45.729942 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:45.729996 kubelet[4137]: E0516 00:52:45.729978 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" podUID="21480104-b865-4459-9a00-43d79bca056f" May 16 00:52:46.516043 containerd[2665]: time="2025-05-16T00:52:46.515977767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 16 00:52:46.516043 containerd[2665]: time="2025-05-16T00:52:46.516044355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:46.516674 containerd[2665]: time="2025-05-16T00:52:46.516652004Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:46.518369 containerd[2665]: time="2025-05-16T00:52:46.518345495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:46.519005 containerd[2665]: time="2025-05-16T00:52:46.518975180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 1.863637868s" May 16 00:52:46.519046 containerd[2665]: time="2025-05-16T00:52:46.519009854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 16 00:52:46.524485 containerd[2665]: time="2025-05-16T00:52:46.524461579Z" level=info msg="CreateContainer within sandbox \"89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 00:52:46.534646 containerd[2665]: time="2025-05-16T00:52:46.534610568Z" level=info msg="CreateContainer within sandbox \"89a91a46369b6c2427aa30485f3fd53fa076dd843297df640607b4964e4152b3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"07cf033963ce69db4efae41620a8ca0fe950758564f2aa14b1cf6b973be8db6f\"" May 16 00:52:46.535052 containerd[2665]: time="2025-05-16T00:52:46.535026812Z" level=info msg="StartContainer for \"07cf033963ce69db4efae41620a8ca0fe950758564f2aa14b1cf6b973be8db6f\"" May 16 00:52:46.561854 systemd[1]: Started cri-containerd-07cf033963ce69db4efae41620a8ca0fe950758564f2aa14b1cf6b973be8db6f.scope - libcontainer container 07cf033963ce69db4efae41620a8ca0fe950758564f2aa14b1cf6b973be8db6f. May 16 00:52:46.583693 containerd[2665]: time="2025-05-16T00:52:46.583662739Z" level=info msg="StartContainer for \"07cf033963ce69db4efae41620a8ca0fe950758564f2aa14b1cf6b973be8db6f\" returns successfully" May 16 00:52:46.589271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1-shm.mount: Deactivated successfully. May 16 00:52:46.589348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002105973.mount: Deactivated successfully. May 16 00:52:46.666115 kubelet[4137]: I0516 00:52:46.666093 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4" May 16 00:52:46.666501 containerd[2665]: time="2025-05-16T00:52:46.666481350Z" level=info msg="StopPodSandbox for \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\"" May 16 00:52:46.666647 containerd[2665]: time="2025-05-16T00:52:46.666632482Z" level=info msg="Ensure that sandbox fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4 in task-service has been cleanup successfully" May 16 00:52:46.666819 containerd[2665]: time="2025-05-16T00:52:46.666805691Z" level=info msg="TearDown network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" successfully" May 16 00:52:46.666842 containerd[2665]: time="2025-05-16T00:52:46.666819248Z" level=info msg="StopPodSandbox for \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" returns successfully" May 16 00:52:46.667051 containerd[2665]: time="2025-05-16T00:52:46.667029770Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\"" May 16 00:52:46.667131 containerd[2665]: time="2025-05-16T00:52:46.667118314Z" level=info msg="TearDown network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" successfully" May 16 00:52:46.667152 containerd[2665]: time="2025-05-16T00:52:46.667131791Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" returns successfully" May 16 00:52:46.667281 kubelet[4137]: I0516 00:52:46.667269 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1" May 16 00:52:46.667338 containerd[2665]: time="2025-05-16T00:52:46.667321477Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" May 16 00:52:46.667409 containerd[2665]: time="2025-05-16T00:52:46.667397543Z" level=info msg="TearDown network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" successfully" May 16 00:52:46.667428 containerd[2665]: time="2025-05-16T00:52:46.667409381Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" returns successfully" May 16 00:52:46.667599 containerd[2665]: time="2025-05-16T00:52:46.667581989Z" level=info msg="StopPodSandbox for 
\"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\"" May 16 00:52:46.667725 containerd[2665]: time="2025-05-16T00:52:46.667712565Z" level=info msg="Ensure that sandbox 3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1 in task-service has been cleanup successfully" May 16 00:52:46.667824 containerd[2665]: time="2025-05-16T00:52:46.667804389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:3,}" May 16 00:52:46.667908 containerd[2665]: time="2025-05-16T00:52:46.667892253Z" level=info msg="TearDown network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" successfully" May 16 00:52:46.667928 containerd[2665]: time="2025-05-16T00:52:46.667908810Z" level=info msg="StopPodSandbox for \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" returns successfully" May 16 00:52:46.668127 containerd[2665]: time="2025-05-16T00:52:46.668113292Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\"" May 16 00:52:46.668191 containerd[2665]: time="2025-05-16T00:52:46.668181120Z" level=info msg="TearDown network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" successfully" May 16 00:52:46.668212 containerd[2665]: time="2025-05-16T00:52:46.668191438Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" returns successfully" May 16 00:52:46.668417 containerd[2665]: time="2025-05-16T00:52:46.668403519Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" May 16 00:52:46.668486 containerd[2665]: time="2025-05-16T00:52:46.668475786Z" level=info msg="TearDown network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" successfully" May 16 00:52:46.668506 containerd[2665]: time="2025-05-16T00:52:46.668486344Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" returns successfully" May 16 00:52:46.668548 systemd[1]: run-netns-cni\x2d92036a96\x2dd7b6\x2dd921\x2d27dc\x2d19cea40cabd2.mount: Deactivated successfully. May 16 00:52:46.668879 containerd[2665]: time="2025-05-16T00:52:46.668856997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:3,}" May 16 00:52:46.670293 kubelet[4137]: I0516 00:52:46.670278 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9" May 16 00:52:46.670626 systemd[1]: run-netns-cni\x2d77e22e91\x2d903c\x2d5aae\x2dac2c\x2d0cb9bed90f74.mount: Deactivated successfully. 
May 16 00:52:46.670877 containerd[2665]: time="2025-05-16T00:52:46.670857632Z" level=info msg="StopPodSandbox for \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\"" May 16 00:52:46.671008 containerd[2665]: time="2025-05-16T00:52:46.670996366Z" level=info msg="Ensure that sandbox 5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9 in task-service has been cleanup successfully" May 16 00:52:46.671217 containerd[2665]: time="2025-05-16T00:52:46.671180933Z" level=info msg="TearDown network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" successfully" May 16 00:52:46.671390 containerd[2665]: time="2025-05-16T00:52:46.671375217Z" level=info msg="StopPodSandbox for \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" returns successfully" May 16 00:52:46.671600 containerd[2665]: time="2025-05-16T00:52:46.671582579Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\"" May 16 00:52:46.671666 containerd[2665]: time="2025-05-16T00:52:46.671656406Z" level=info msg="TearDown network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" successfully" May 16 00:52:46.671686 containerd[2665]: time="2025-05-16T00:52:46.671666884Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" returns successfully" May 16 00:52:46.671888 containerd[2665]: time="2025-05-16T00:52:46.671873886Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" May 16 00:52:46.671916 kubelet[4137]: I0516 00:52:46.671885 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f" May 16 00:52:46.671950 containerd[2665]: time="2025-05-16T00:52:46.671940034Z" level=info msg="TearDown network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" successfully" May 16 00:52:46.671982 containerd[2665]: time="2025-05-16T00:52:46.671950352Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" returns successfully" May 16 00:52:46.672323 containerd[2665]: time="2025-05-16T00:52:46.672305647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:3,}" May 16 00:52:46.672576 containerd[2665]: time="2025-05-16T00:52:46.672558761Z" level=info msg="StopPodSandbox for \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\"" May 16 00:52:46.672813 containerd[2665]: time="2025-05-16T00:52:46.672787520Z" level=info msg="Ensure that sandbox 8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f in task-service has been cleanup successfully" May 16 00:52:46.673016 containerd[2665]: time="2025-05-16T00:52:46.673000401Z" level=info msg="TearDown network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" successfully" May 16 00:52:46.673036 containerd[2665]: time="2025-05-16T00:52:46.673016158Z" level=info msg="StopPodSandbox for \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" returns successfully" May 16 00:52:46.673295 containerd[2665]: time="2025-05-16T00:52:46.673278990Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\"" May 16 00:52:46.673353 containerd[2665]: 
time="2025-05-16T00:52:46.673342418Z" level=info msg="TearDown network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" successfully" May 16 00:52:46.673374 containerd[2665]: time="2025-05-16T00:52:46.673353056Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" returns successfully" May 16 00:52:46.673531 containerd[2665]: time="2025-05-16T00:52:46.673516747Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" May 16 00:52:46.673538 systemd[1]: run-netns-cni\x2d2ba32eb1\x2d6d74\x2dea6e\x2d833c\x2dd31b0d44636b.mount: Deactivated successfully. May 16 00:52:46.673612 kubelet[4137]: I0516 00:52:46.673531 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5" May 16 00:52:46.673642 containerd[2665]: time="2025-05-16T00:52:46.673582215Z" level=info msg="TearDown network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" successfully" May 16 00:52:46.673642 containerd[2665]: time="2025-05-16T00:52:46.673591813Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" returns successfully" May 16 00:52:46.673921 containerd[2665]: time="2025-05-16T00:52:46.673906555Z" level=info msg="StopPodSandbox for \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\"" May 16 00:52:46.673952 containerd[2665]: time="2025-05-16T00:52:46.673921873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:3,}" May 16 00:52:46.674040 containerd[2665]: time="2025-05-16T00:52:46.674028333Z" level=info msg="Ensure that sandbox 5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5 in task-service has been cleanup successfully" May 16 00:52:46.674205 containerd[2665]: time="2025-05-16T00:52:46.674191783Z" level=info msg="TearDown network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" successfully" May 16 00:52:46.674226 containerd[2665]: time="2025-05-16T00:52:46.674205461Z" level=info msg="StopPodSandbox for \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" returns successfully" May 16 00:52:46.674518 containerd[2665]: time="2025-05-16T00:52:46.674500767Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\"" May 16 00:52:46.674573 containerd[2665]: time="2025-05-16T00:52:46.674565515Z" level=info msg="TearDown network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" successfully" May 16 00:52:46.674599 containerd[2665]: time="2025-05-16T00:52:46.674575513Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" returns successfully" May 16 00:52:46.674822 containerd[2665]: time="2025-05-16T00:52:46.674802632Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" May 16 00:52:46.674892 containerd[2665]: time="2025-05-16T00:52:46.674880818Z" level=info msg="TearDown network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" successfully" May 16 00:52:46.674913 containerd[2665]: time="2025-05-16T00:52:46.674892256Z" level=info msg="StopPodSandbox for 
\"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" returns successfully" May 16 00:52:46.675238 kubelet[4137]: I0516 00:52:46.675222 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb" May 16 00:52:46.675274 containerd[2665]: time="2025-05-16T00:52:46.675245191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:3,}" May 16 00:52:46.675585 containerd[2665]: time="2025-05-16T00:52:46.675565653Z" level=info msg="StopPodSandbox for \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\"" May 16 00:52:46.675709 containerd[2665]: time="2025-05-16T00:52:46.675696909Z" level=info msg="Ensure that sandbox 34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb in task-service has been cleanup successfully" May 16 00:52:46.675873 containerd[2665]: time="2025-05-16T00:52:46.675851481Z" level=info msg="TearDown network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" successfully" May 16 00:52:46.675894 containerd[2665]: time="2025-05-16T00:52:46.675873956Z" level=info msg="StopPodSandbox for \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" returns successfully" May 16 00:52:46.676149 containerd[2665]: time="2025-05-16T00:52:46.676136789Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\"" May 16 00:52:46.676211 containerd[2665]: time="2025-05-16T00:52:46.676201537Z" level=info msg="TearDown network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" successfully" May 16 00:52:46.676242 containerd[2665]: time="2025-05-16T00:52:46.676211815Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" returns successfully" May 16 00:52:46.676417 containerd[2665]: time="2025-05-16T00:52:46.676398221Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" May 16 00:52:46.676494 containerd[2665]: time="2025-05-16T00:52:46.676483765Z" level=info msg="TearDown network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" successfully" May 16 00:52:46.676515 containerd[2665]: time="2025-05-16T00:52:46.676494803Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" returns successfully" May 16 00:52:46.676708 kubelet[4137]: I0516 00:52:46.676693 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1" May 16 00:52:46.676910 containerd[2665]: time="2025-05-16T00:52:46.676888211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:3,}" May 16 00:52:46.677036 systemd[1]: run-netns-cni\x2d101ca016\x2d75be\x2db5c4\x2df58f\x2d0ed0298af486.mount: Deactivated successfully. May 16 00:52:46.677107 systemd[1]: run-netns-cni\x2dd62d2b0f\x2dc0ba\x2daeac\x2db040\x2d7601044cb351.mount: Deactivated successfully. 
May 16 00:52:46.677136 containerd[2665]: time="2025-05-16T00:52:46.677111491Z" level=info msg="StopPodSandbox for \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\"" May 16 00:52:46.677263 containerd[2665]: time="2025-05-16T00:52:46.677250145Z" level=info msg="Ensure that sandbox 999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1 in task-service has been cleanup successfully" May 16 00:52:46.678290 containerd[2665]: time="2025-05-16T00:52:46.678256162Z" level=info msg="TearDown network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" successfully" May 16 00:52:46.678318 containerd[2665]: time="2025-05-16T00:52:46.678292155Z" level=info msg="StopPodSandbox for \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" returns successfully" May 16 00:52:46.679097 containerd[2665]: time="2025-05-16T00:52:46.679071493Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\"" May 16 00:52:46.679201 containerd[2665]: time="2025-05-16T00:52:46.679187792Z" level=info msg="TearDown network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" successfully" May 16 00:52:46.679277 kubelet[4137]: I0516 00:52:46.679258 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f" May 16 00:52:46.680034 containerd[2665]: time="2025-05-16T00:52:46.680017121Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" returns successfully" May 16 00:52:46.680179 containerd[2665]: time="2025-05-16T00:52:46.680152616Z" level=info msg="StopPodSandbox for \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\"" May 16 00:52:46.680255 kubelet[4137]: I0516 00:52:46.680219 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pnck2" podStartSLOduration=0.677165067 podStartE2EDuration="5.680204127s" podCreationTimestamp="2025-05-16 00:52:41 +0000 UTC" firstStartedPulling="2025-05-16 00:52:41.51654045 +0000 UTC m=+19.980287557" lastFinishedPulling="2025-05-16 00:52:46.51957955 +0000 UTC m=+24.983326617" observedRunningTime="2025-05-16 00:52:46.67908729 +0000 UTC m=+25.142834397" watchObservedRunningTime="2025-05-16 00:52:46.680204127 +0000 UTC m=+25.143951234" May 16 00:52:46.680347 containerd[2665]: time="2025-05-16T00:52:46.680333063Z" level=info msg="Ensure that sandbox 3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f in task-service has been cleanup successfully" May 16 00:52:46.680370 containerd[2665]: time="2025-05-16T00:52:46.680349380Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" May 16 00:52:46.680470 containerd[2665]: time="2025-05-16T00:52:46.680457520Z" level=info msg="TearDown network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" successfully" May 16 00:52:46.680498 containerd[2665]: time="2025-05-16T00:52:46.680470958Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" returns successfully" May 16 00:52:46.680533 containerd[2665]: time="2025-05-16T00:52:46.680515790Z" level=info msg="TearDown network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" successfully" May 16 00:52:46.680559 containerd[2665]: time="2025-05-16T00:52:46.680533346Z" level=info 
msg="StopPodSandbox for \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" returns successfully" May 16 00:52:46.680855 containerd[2665]: time="2025-05-16T00:52:46.680838371Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\"" May 16 00:52:46.680927 containerd[2665]: time="2025-05-16T00:52:46.680846729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:3,}" May 16 00:52:46.681013 containerd[2665]: time="2025-05-16T00:52:46.680922995Z" level=info msg="TearDown network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" successfully" May 16 00:52:46.681042 containerd[2665]: time="2025-05-16T00:52:46.681015618Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" returns successfully" May 16 00:52:46.681451 containerd[2665]: time="2025-05-16T00:52:46.681429943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:2,}" May 16 00:52:46.712079 containerd[2665]: time="2025-05-16T00:52:46.712027521Z" level=error msg="Failed to destroy network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.712396 containerd[2665]: time="2025-05-16T00:52:46.712373098Z" level=error msg="encountered an error cleaning up failed sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.712448 containerd[2665]: time="2025-05-16T00:52:46.712432567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.712661 kubelet[4137]: E0516 00:52:46.712629 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.712703 kubelet[4137]: E0516 00:52:46.712688 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:46.712734 
kubelet[4137]: E0516 00:52:46.712712 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" May 16 00:52:46.712777 kubelet[4137]: E0516 00:52:46.712757 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-7s65s_calico-apiserver(a4ae8cdc-592f-4b69-bcaa-748076a5f292)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" podUID="a4ae8cdc-592f-4b69-bcaa-748076a5f292" May 16 00:52:46.712842 containerd[2665]: time="2025-05-16T00:52:46.712808938Z" level=error msg="Failed to destroy network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.713144 containerd[2665]: time="2025-05-16T00:52:46.713120841Z" level=error msg="encountered an error cleaning up failed sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.713190 containerd[2665]: time="2025-05-16T00:52:46.713174632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.713310 kubelet[4137]: E0516 00:52:46.713286 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.713345 kubelet[4137]: E0516 00:52:46.713327 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:46.713367 kubelet[4137]: E0516 00:52:46.713350 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" May 16 00:52:46.713401 kubelet[4137]: E0516 00:52:46.713383 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c6564c8c8-94d98_calico-system(21480104-b865-4459-9a00-43d79bca056f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" podUID="21480104-b865-4459-9a00-43d79bca056f" May 16 00:52:46.719045 containerd[2665]: time="2025-05-16T00:52:46.719012207Z" level=error msg="Failed to destroy network for sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.719406 containerd[2665]: time="2025-05-16T00:52:46.719381179Z" level=error msg="encountered an error cleaning up failed sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.719446 containerd[2665]: time="2025-05-16T00:52:46.719434210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.719624 kubelet[4137]: E0516 00:52:46.719590 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.719662 kubelet[4137]: E0516 00:52:46.719640 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:46.719708 kubelet[4137]: E0516 00:52:46.719658 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-hrh6s" May 16 00:52:46.719708 kubelet[4137]: E0516 00:52:46.719693 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:52:46.721174 containerd[2665]: time="2025-05-16T00:52:46.721139139Z" level=error msg="Failed to destroy network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.721512 containerd[2665]: time="2025-05-16T00:52:46.721488235Z" level=error msg="encountered an error cleaning up failed sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.721556 containerd[2665]: time="2025-05-16T00:52:46.721540545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfc496f9-hrhpn,Uid:d8a55ae1-9984-429b-9b49-8a7e9ec6fb11,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.721693 kubelet[4137]: E0516 00:52:46.721672 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.721717 kubelet[4137]: E0516 00:52:46.721704 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:46.721738 kubelet[4137]: E0516 00:52:46.721719 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dfc496f9-hrhpn" May 16 00:52:46.721782 kubelet[4137]: E0516 00:52:46.721764 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dfc496f9-hrhpn_calico-system(d8a55ae1-9984-429b-9b49-8a7e9ec6fb11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dfc496f9-hrhpn" podUID="d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" May 16 00:52:46.722660 containerd[2665]: time="2025-05-16T00:52:46.722638185Z" level=error msg="Failed to destroy network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.722963 containerd[2665]: time="2025-05-16T00:52:46.722941610Z" level=error msg="encountered an error cleaning up failed sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.723003 containerd[2665]: time="2025-05-16T00:52:46.722989241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.723107 kubelet[4137]: E0516 00:52:46.723084 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.723135 kubelet[4137]: E0516 00:52:46.723121 4137 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:46.723157 kubelet[4137]: E0516 00:52:46.723137 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" May 16 00:52:46.723198 kubelet[4137]: E0516 00:52:46.723179 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf5649dd4-49jjr_calico-apiserver(6a45b85a-e5a7-4f77-b3f6-35027c560e3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" podUID="6a45b85a-e5a7-4f77-b3f6-35027c560e3c" May 16 00:52:46.772789 containerd[2665]: time="2025-05-16T00:52:46.772674777Z" level=error msg="Failed to destroy network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.773076 containerd[2665]: time="2025-05-16T00:52:46.773054867Z" level=error msg="encountered an error cleaning up failed sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.773126 containerd[2665]: time="2025-05-16T00:52:46.773111937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.773338 kubelet[4137]: E0516 00:52:46.773301 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 16 00:52:46.773375 kubelet[4137]: E0516 00:52:46.773362 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:46.773401 kubelet[4137]: E0516 00:52:46.773382 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbpwb" May 16 00:52:46.773444 kubelet[4137]: E0516 00:52:46.773421 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wbpwb_kube-system(73af8152-9d20-4cac-a7ca-149e696fff87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wbpwb" podUID="73af8152-9d20-4cac-a7ca-149e696fff87" May 16 00:52:46.773867 containerd[2665]: time="2025-05-16T00:52:46.773840604Z" level=error msg="Failed to destroy network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.774116 containerd[2665]: time="2025-05-16T00:52:46.774096637Z" level=error msg="Failed to destroy network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.774202 containerd[2665]: time="2025-05-16T00:52:46.774109435Z" level=error msg="encountered an error cleaning up failed sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.774249 containerd[2665]: time="2025-05-16T00:52:46.774234012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 16 00:52:46.774367 containerd[2665]: time="2025-05-16T00:52:46.774348911Z" level=error msg="encountered an error cleaning up failed sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.774393 kubelet[4137]: E0516 00:52:46.774365 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.774420 kubelet[4137]: E0516 00:52:46.774401 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65t8z" May 16 00:52:46.774445 containerd[2665]: time="2025-05-16T00:52:46.774390064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.774473 kubelet[4137]: E0516 00:52:46.774418 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65t8z" May 16 00:52:46.774473 kubelet[4137]: E0516 00:52:46.774449 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-65t8z_calico-system(f82c5669-93be-4155-9022-550402fea58a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-65t8z_calico-system(f82c5669-93be-4155-9022-550402fea58a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-65t8z" podUID="f82c5669-93be-4155-9022-550402fea58a" May 16 00:52:46.774545 kubelet[4137]: E0516 00:52:46.774515 4137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:52:46.774572 kubelet[4137]: E0516 00:52:46.774559 4137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:46.774597 kubelet[4137]: E0516 00:52:46.774577 4137 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4h2v7" May 16 00:52:46.774623 kubelet[4137]: E0516 00:52:46.774606 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4h2v7_kube-system(977000b3-b558-41f5-b536-60dbfe10c1b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4h2v7" podUID="977000b3-b558-41f5-b536-60dbfe10c1b4" May 16 00:52:46.827766 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 00:52:46.827824 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 00:52:47.587359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940-shm.mount: Deactivated successfully. May 16 00:52:47.587440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f-shm.mount: Deactivated successfully. May 16 00:52:47.587488 systemd[1]: run-netns-cni\x2d29a7c998\x2da165\x2d7bb4\x2da673\x2d7850bf712d06.mount: Deactivated successfully. May 16 00:52:47.587530 systemd[1]: run-netns-cni\x2dc91edc26\x2ddbff\x2dc2bb\x2dff49\x2d8b9b3224fb82.mount: Deactivated successfully. May 16 00:52:47.587569 systemd[1]: run-netns-cni\x2d3f205622\x2daecb\x2d87bc\x2df1f5\x2d113775b76e90.mount: Deactivated successfully. 
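
Every CNI add and delete in the entries above fails on the same stat of /var/lib/calico/nodename, and the error text itself names the precondition: the calico/node container must be running with /var/lib/calico/ mounted before the plugin can resolve the node name. A minimal node-side check is sketched below; it assumes, as the error message implies, that calico-node writes the registered node name into that file once it is up. The path is taken from the log, everything else is illustrative.

from pathlib import Path

# Check for the file that the Calico CNI plugin stats in the errors above.
# Assumption: calico-node writes the node's registered name here once it has
# started and has /var/lib/calico/ mounted.
nodename = Path("/var/lib/calico/nodename")

if nodename.is_file():
    print(f"calico-node is up; node registered as {nodename.read_text().strip()!r}")
else:
    print("nodename file missing: calico-node is not running yet (or /var/lib/calico/ "
          "is not mounted into it), so CNI add/delete calls will keep failing")
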
May 16 00:52:47.682194 kubelet[4137]: I0516 00:52:47.682162 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f" May 16 00:52:47.682573 containerd[2665]: time="2025-05-16T00:52:47.682548240Z" level=info msg="StopPodSandbox for \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\"" May 16 00:52:47.682766 containerd[2665]: time="2025-05-16T00:52:47.682735008Z" level=info msg="Ensure that sandbox ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f in task-service has been cleanup successfully" May 16 00:52:47.682923 containerd[2665]: time="2025-05-16T00:52:47.682910538Z" level=info msg="TearDown network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\" successfully" May 16 00:52:47.682945 containerd[2665]: time="2025-05-16T00:52:47.682923616Z" level=info msg="StopPodSandbox for \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\" returns successfully" May 16 00:52:47.683240 containerd[2665]: time="2025-05-16T00:52:47.683224605Z" level=info msg="StopPodSandbox for \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\"" May 16 00:52:47.683298 containerd[2665]: time="2025-05-16T00:52:47.683287634Z" level=info msg="TearDown network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" successfully" May 16 00:52:47.683318 containerd[2665]: time="2025-05-16T00:52:47.683298032Z" level=info msg="StopPodSandbox for \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" returns successfully" May 16 00:52:47.683495 containerd[2665]: time="2025-05-16T00:52:47.683483840Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\"" May 16 00:52:47.683550 containerd[2665]: time="2025-05-16T00:52:47.683540711Z" level=info msg="TearDown network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" successfully" May 16 00:52:47.683571 containerd[2665]: time="2025-05-16T00:52:47.683550509Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" returns successfully" May 16 00:52:47.683722 kubelet[4137]: I0516 00:52:47.683708 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940" May 16 00:52:47.683752 containerd[2665]: time="2025-05-16T00:52:47.683721480Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" May 16 00:52:47.683821 containerd[2665]: time="2025-05-16T00:52:47.683808345Z" level=info msg="TearDown network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" successfully" May 16 00:52:47.683842 containerd[2665]: time="2025-05-16T00:52:47.683820943Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" returns successfully" May 16 00:52:47.684037 containerd[2665]: time="2025-05-16T00:52:47.684024028Z" level=info msg="StopPodSandbox for \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\"" May 16 00:52:47.684168 containerd[2665]: time="2025-05-16T00:52:47.684156845Z" level=info msg="Ensure that sandbox fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940 in task-service has been cleanup successfully" May 16 00:52:47.684316 containerd[2665]: time="2025-05-16T00:52:47.684304140Z" level=info 
msg="TearDown network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\" successfully" May 16 00:52:47.684336 containerd[2665]: time="2025-05-16T00:52:47.684316218Z" level=info msg="StopPodSandbox for \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\" returns successfully" May 16 00:52:47.684411 containerd[2665]: time="2025-05-16T00:52:47.684390205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:4,}" May 16 00:52:47.684545 systemd[1]: run-netns-cni\x2d5f2cdcc8\x2dee6f\x2d77b8\x2d03fa\x2d5a18d47d95ad.mount: Deactivated successfully. May 16 00:52:47.684693 containerd[2665]: time="2025-05-16T00:52:47.684603449Z" level=info msg="StopPodSandbox for \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\"" May 16 00:52:47.684693 containerd[2665]: time="2025-05-16T00:52:47.684664598Z" level=info msg="TearDown network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" successfully" May 16 00:52:47.684693 containerd[2665]: time="2025-05-16T00:52:47.684674117Z" level=info msg="StopPodSandbox for \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" returns successfully" May 16 00:52:47.684917 containerd[2665]: time="2025-05-16T00:52:47.684899038Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\"" May 16 00:52:47.684988 containerd[2665]: time="2025-05-16T00:52:47.684977545Z" level=info msg="TearDown network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" successfully" May 16 00:52:47.685019 containerd[2665]: time="2025-05-16T00:52:47.684988583Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" returns successfully" May 16 00:52:47.685239 containerd[2665]: time="2025-05-16T00:52:47.685222023Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" May 16 00:52:47.685312 containerd[2665]: time="2025-05-16T00:52:47.685301410Z" level=info msg="TearDown network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" successfully" May 16 00:52:47.685333 containerd[2665]: time="2025-05-16T00:52:47.685312288Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" returns successfully" May 16 00:52:47.685426 kubelet[4137]: I0516 00:52:47.685410 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c" May 16 00:52:47.685693 containerd[2665]: time="2025-05-16T00:52:47.685677705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:4,}" May 16 00:52:47.685934 containerd[2665]: time="2025-05-16T00:52:47.685920344Z" level=info msg="StopPodSandbox for \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\"" May 16 00:52:47.686060 containerd[2665]: time="2025-05-16T00:52:47.686047642Z" level=info msg="Ensure that sandbox 0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c in task-service has been cleanup successfully" May 16 00:52:47.686219 containerd[2665]: time="2025-05-16T00:52:47.686207175Z" level=info msg="TearDown network for sandbox 
\"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\" successfully" May 16 00:52:47.686239 containerd[2665]: time="2025-05-16T00:52:47.686220132Z" level=info msg="StopPodSandbox for \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\" returns successfully" May 16 00:52:47.686471 containerd[2665]: time="2025-05-16T00:52:47.686455812Z" level=info msg="StopPodSandbox for \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\"" May 16 00:52:47.686537 containerd[2665]: time="2025-05-16T00:52:47.686528160Z" level=info msg="TearDown network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" successfully" May 16 00:52:47.686557 containerd[2665]: time="2025-05-16T00:52:47.686537998Z" level=info msg="StopPodSandbox for \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" returns successfully" May 16 00:52:47.686597 systemd[1]: run-netns-cni\x2d43fe3243\x2d0389\x2dbccb\x2d5e3d\x2d3b0490c89643.mount: Deactivated successfully. May 16 00:52:47.686713 containerd[2665]: time="2025-05-16T00:52:47.686700650Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\"" May 16 00:52:47.686782 containerd[2665]: time="2025-05-16T00:52:47.686772638Z" level=info msg="TearDown network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" successfully" May 16 00:52:47.686802 containerd[2665]: time="2025-05-16T00:52:47.686782396Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" returns successfully" May 16 00:52:47.686907 kubelet[4137]: I0516 00:52:47.686895 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a" May 16 00:52:47.686981 containerd[2665]: time="2025-05-16T00:52:47.686969284Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" May 16 00:52:47.687048 containerd[2665]: time="2025-05-16T00:52:47.687037233Z" level=info msg="TearDown network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" successfully" May 16 00:52:47.687069 containerd[2665]: time="2025-05-16T00:52:47.687048431Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" returns successfully" May 16 00:52:47.687247 containerd[2665]: time="2025-05-16T00:52:47.687229960Z" level=info msg="StopPodSandbox for \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\"" May 16 00:52:47.687348 containerd[2665]: time="2025-05-16T00:52:47.687329183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:4,}" May 16 00:52:47.687377 containerd[2665]: time="2025-05-16T00:52:47.687360897Z" level=info msg="Ensure that sandbox 1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a in task-service has been cleanup successfully" May 16 00:52:47.687541 containerd[2665]: time="2025-05-16T00:52:47.687528829Z" level=info msg="TearDown network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\" successfully" May 16 00:52:47.687566 containerd[2665]: time="2025-05-16T00:52:47.687543306Z" level=info msg="StopPodSandbox for \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\" returns successfully" May 16 00:52:47.687800 containerd[2665]: 
time="2025-05-16T00:52:47.687786704Z" level=info msg="StopPodSandbox for \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\"" May 16 00:52:47.687871 containerd[2665]: time="2025-05-16T00:52:47.687861412Z" level=info msg="TearDown network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" successfully" May 16 00:52:47.687891 containerd[2665]: time="2025-05-16T00:52:47.687873530Z" level=info msg="StopPodSandbox for \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" returns successfully" May 16 00:52:47.688090 containerd[2665]: time="2025-05-16T00:52:47.688073335Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\"" May 16 00:52:47.688150 containerd[2665]: time="2025-05-16T00:52:47.688141964Z" level=info msg="TearDown network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" successfully" May 16 00:52:47.688177 containerd[2665]: time="2025-05-16T00:52:47.688152082Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" returns successfully" May 16 00:52:47.688280 kubelet[4137]: I0516 00:52:47.688265 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a" May 16 00:52:47.688367 containerd[2665]: time="2025-05-16T00:52:47.688350328Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" May 16 00:52:47.688437 containerd[2665]: time="2025-05-16T00:52:47.688426075Z" level=info msg="TearDown network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" successfully" May 16 00:52:47.688457 containerd[2665]: time="2025-05-16T00:52:47.688437953Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" returns successfully" May 16 00:52:47.688508 containerd[2665]: time="2025-05-16T00:52:47.688495103Z" level=info msg="StopPodSandbox for \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\"" May 16 00:52:47.688575 systemd[1]: run-netns-cni\x2d8a6ad065\x2dcc2e\x2da575\x2db406\x2dc753cdc42612.mount: Deactivated successfully. 
May 16 00:52:47.688641 containerd[2665]: time="2025-05-16T00:52:47.688629160Z" level=info msg="Ensure that sandbox 896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a in task-service has been cleanup successfully" May 16 00:52:47.688772 containerd[2665]: time="2025-05-16T00:52:47.688756459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:4,}" May 16 00:52:47.688818 containerd[2665]: time="2025-05-16T00:52:47.688805490Z" level=info msg="TearDown network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\" successfully" May 16 00:52:47.688839 containerd[2665]: time="2025-05-16T00:52:47.688819488Z" level=info msg="StopPodSandbox for \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\" returns successfully" May 16 00:52:47.689023 containerd[2665]: time="2025-05-16T00:52:47.689006616Z" level=info msg="StopPodSandbox for \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\"" May 16 00:52:47.689085 containerd[2665]: time="2025-05-16T00:52:47.689073724Z" level=info msg="TearDown network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" successfully" May 16 00:52:47.689105 containerd[2665]: time="2025-05-16T00:52:47.689085002Z" level=info msg="StopPodSandbox for \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" returns successfully" May 16 00:52:47.689285 containerd[2665]: time="2025-05-16T00:52:47.689269851Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\"" May 16 00:52:47.689400 containerd[2665]: time="2025-05-16T00:52:47.689339559Z" level=info msg="TearDown network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" successfully" May 16 00:52:47.689421 containerd[2665]: time="2025-05-16T00:52:47.689401708Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" returns successfully" May 16 00:52:47.689688 containerd[2665]: time="2025-05-16T00:52:47.689673262Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" May 16 00:52:47.689757 containerd[2665]: time="2025-05-16T00:52:47.689747129Z" level=info msg="TearDown network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" successfully" May 16 00:52:47.689778 containerd[2665]: time="2025-05-16T00:52:47.689757407Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" returns successfully" May 16 00:52:47.689806 kubelet[4137]: I0516 00:52:47.689793 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522" May 16 00:52:47.690102 containerd[2665]: time="2025-05-16T00:52:47.690089591Z" level=info msg="StopPodSandbox for \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\"" May 16 00:52:47.690229 containerd[2665]: time="2025-05-16T00:52:47.690217849Z" level=info msg="Ensure that sandbox 2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522 in task-service has been cleanup successfully" May 16 00:52:47.690392 containerd[2665]: time="2025-05-16T00:52:47.690379541Z" level=info msg="TearDown network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\" successfully" May 16 00:52:47.690412 containerd[2665]: 
time="2025-05-16T00:52:47.690392019Z" level=info msg="StopPodSandbox for \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\" returns successfully" May 16 00:52:47.690550 systemd[1]: run-netns-cni\x2d85b2c54a\x2d5eca\x2dc2fa\x2d1e50\x2d365abdf6bd6b.mount: Deactivated successfully. May 16 00:52:47.690589 containerd[2665]: time="2025-05-16T00:52:47.690573188Z" level=info msg="StopPodSandbox for \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\"" May 16 00:52:47.690620 systemd[1]: run-netns-cni\x2d59a18d23\x2df425\x2df478\x2ddfc0\x2d222d0943af63.mount: Deactivated successfully. May 16 00:52:47.690649 containerd[2665]: time="2025-05-16T00:52:47.690639617Z" level=info msg="TearDown network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" successfully" May 16 00:52:47.690669 containerd[2665]: time="2025-05-16T00:52:47.690651335Z" level=info msg="StopPodSandbox for \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" returns successfully" May 16 00:52:47.690845 containerd[2665]: time="2025-05-16T00:52:47.690832064Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\"" May 16 00:52:47.690911 containerd[2665]: time="2025-05-16T00:52:47.690901372Z" level=info msg="TearDown network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" successfully" May 16 00:52:47.690931 containerd[2665]: time="2025-05-16T00:52:47.690911650Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" returns successfully" May 16 00:52:47.691065 containerd[2665]: time="2025-05-16T00:52:47.691047307Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" May 16 00:52:47.691136 containerd[2665]: time="2025-05-16T00:52:47.691125813Z" level=info msg="TearDown network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" successfully" May 16 00:52:47.691156 containerd[2665]: time="2025-05-16T00:52:47.691136132Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" returns successfully" May 16 00:52:47.691210 kubelet[4137]: I0516 00:52:47.691197 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110" May 16 00:52:47.691468 containerd[2665]: time="2025-05-16T00:52:47.691451838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:4,}" May 16 00:52:47.691524 containerd[2665]: time="2025-05-16T00:52:47.691507988Z" level=info msg="StopPodSandbox for \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\"" May 16 00:52:47.691660 containerd[2665]: time="2025-05-16T00:52:47.691646644Z" level=info msg="Ensure that sandbox 75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110 in task-service has been cleanup successfully" May 16 00:52:47.691814 containerd[2665]: time="2025-05-16T00:52:47.691800378Z" level=info msg="TearDown network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\" successfully" May 16 00:52:47.691834 containerd[2665]: time="2025-05-16T00:52:47.691814536Z" level=info msg="StopPodSandbox for \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\" returns successfully" May 16 00:52:47.692016 containerd[2665]: 
time="2025-05-16T00:52:47.692005303Z" level=info msg="StopPodSandbox for \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\"" May 16 00:52:47.692076 containerd[2665]: time="2025-05-16T00:52:47.692066213Z" level=info msg="TearDown network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" successfully" May 16 00:52:47.692096 containerd[2665]: time="2025-05-16T00:52:47.692076251Z" level=info msg="StopPodSandbox for \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" returns successfully" May 16 00:52:47.692292 containerd[2665]: time="2025-05-16T00:52:47.692276977Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\"" May 16 00:52:47.692324 kubelet[4137]: I0516 00:52:47.692313 4137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884" May 16 00:52:47.692360 containerd[2665]: time="2025-05-16T00:52:47.692349964Z" level=info msg="TearDown network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" successfully" May 16 00:52:47.692381 containerd[2665]: time="2025-05-16T00:52:47.692360002Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" returns successfully" May 16 00:52:47.692401 kubelet[4137]: I0516 00:52:47.692351 4137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:52:47.692651 containerd[2665]: time="2025-05-16T00:52:47.692633236Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" May 16 00:52:47.692693 containerd[2665]: time="2025-05-16T00:52:47.692678908Z" level=info msg="StopPodSandbox for \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\"" May 16 00:52:47.692729 containerd[2665]: time="2025-05-16T00:52:47.692715701Z" level=info msg="TearDown network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" successfully" May 16 00:52:47.692754 containerd[2665]: time="2025-05-16T00:52:47.692726660Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" returns successfully" May 16 00:52:47.692825 containerd[2665]: time="2025-05-16T00:52:47.692813285Z" level=info msg="Ensure that sandbox 12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884 in task-service has been cleanup successfully" May 16 00:52:47.692986 containerd[2665]: time="2025-05-16T00:52:47.692973137Z" level=info msg="TearDown network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\" successfully" May 16 00:52:47.693007 containerd[2665]: time="2025-05-16T00:52:47.692986935Z" level=info msg="StopPodSandbox for \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\" returns successfully" May 16 00:52:47.693026 containerd[2665]: time="2025-05-16T00:52:47.693011371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:4,}" May 16 00:52:47.694008 containerd[2665]: time="2025-05-16T00:52:47.693978366Z" level=info msg="StopPodSandbox for \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\"" May 16 00:52:47.694445 containerd[2665]: time="2025-05-16T00:52:47.694250279Z" level=info msg="TearDown network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" successfully" 
May 16 00:52:47.694445 containerd[2665]: time="2025-05-16T00:52:47.694324586Z" level=info msg="StopPodSandbox for \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" returns successfully" May 16 00:52:47.694707 containerd[2665]: time="2025-05-16T00:52:47.694683405Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\"" May 16 00:52:47.695568 containerd[2665]: time="2025-05-16T00:52:47.695541018Z" level=info msg="TearDown network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" successfully" May 16 00:52:47.695598 containerd[2665]: time="2025-05-16T00:52:47.695567374Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" returns successfully" May 16 00:52:47.695976 containerd[2665]: time="2025-05-16T00:52:47.695955107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:3,}" May 16 00:52:47.767665 kubelet[4137]: I0516 00:52:47.767622 4137 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cdqr\" (UniqueName: \"kubernetes.io/projected/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-kube-api-access-5cdqr\") pod \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\" (UID: \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\") " May 16 00:52:47.767665 kubelet[4137]: I0516 00:52:47.767673 4137 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-ca-bundle\") pod \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\" (UID: \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\") " May 16 00:52:47.767847 kubelet[4137]: I0516 00:52:47.767699 4137 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-backend-key-pair\") pod \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\" (UID: \"d8a55ae1-9984-429b-9b49-8a7e9ec6fb11\") " May 16 00:52:47.768276 kubelet[4137]: I0516 00:52:47.768044 4137 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" (UID: "d8a55ae1-9984-429b-9b49-8a7e9ec6fb11"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:52:47.769895 kubelet[4137]: I0516 00:52:47.769873 4137 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-kube-api-access-5cdqr" (OuterVolumeSpecName: "kube-api-access-5cdqr") pod "d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" (UID: "d8a55ae1-9984-429b-9b49-8a7e9ec6fb11"). InnerVolumeSpecName "kube-api-access-5cdqr". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:52:47.770236 kubelet[4137]: I0516 00:52:47.770206 4137 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" (UID: "d8a55ae1-9984-429b-9b49-8a7e9ec6fb11"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:52:47.805224 systemd-networkd[2566]: calid716b4c2acc: Link UP May 16 00:52:47.805513 systemd-networkd[2566]: calid716b4c2acc: Gained carrier May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.718 [INFO][7101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.732 [INFO][7101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0 coredns-668d6bf9bc- kube-system 977000b3-b558-41f5-b536-60dbfe10c1b4 772 0 2025-05-16 00:52:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 coredns-668d6bf9bc-4h2v7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid716b4c2acc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.732 [INFO][7101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7252] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" HandleID="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Workload="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" HandleID="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Workload="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c0c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.3-n-16e7659192", "pod":"coredns-668d6bf9bc-4h2v7", "timestamp":"2025-05-16 00:52:47.761594121 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.770 [INFO][7252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.774 [INFO][7252] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.777 [INFO][7252] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.778 [INFO][7252] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.780 [INFO][7252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.780 [INFO][7252] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.781 [INFO][7252] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7 May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.794 [INFO][7252] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.798 [INFO][7252] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.65/26] block=192.168.116.64/26 handle="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.798 [INFO][7252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.65/26] handle="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.798 [INFO][7252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:52:47.812368 containerd[2665]: 2025-05-16 00:52:47.798 [INFO][7252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.65/26] IPv6=[] ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" HandleID="k8s-pod-network.88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Workload="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" May 16 00:52:47.812794 containerd[2665]: 2025-05-16 00:52:47.799 [INFO][7101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"977000b3-b558-41f5-b536-60dbfe10c1b4", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"coredns-668d6bf9bc-4h2v7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid716b4c2acc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:47.812794 containerd[2665]: 2025-05-16 00:52:47.800 [INFO][7101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.65/32] ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" May 16 00:52:47.812794 containerd[2665]: 2025-05-16 00:52:47.800 [INFO][7101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid716b4c2acc ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" May 16 00:52:47.812794 containerd[2665]: 2025-05-16 00:52:47.805 [INFO][7101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" May 16 00:52:47.812794 containerd[2665]: 2025-05-16 00:52:47.805 [INFO][7101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"977000b3-b558-41f5-b536-60dbfe10c1b4", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7", Pod:"coredns-668d6bf9bc-4h2v7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid716b4c2acc", MAC:"f6:2b:9e:52:94:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:47.812794 containerd[2665]: 2025-05-16 00:52:47.811 [INFO][7101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7" Namespace="kube-system" Pod="coredns-668d6bf9bc-4h2v7" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--4h2v7-eth0" May 16 00:52:47.827054 containerd[2665]: time="2025-05-16T00:52:47.826682509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:47.827054 containerd[2665]: time="2025-05-16T00:52:47.827040968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:47.827131 containerd[2665]: time="2025-05-16T00:52:47.827053286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:47.827150 containerd[2665]: time="2025-05-16T00:52:47.827124274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:47.849928 systemd[1]: Started cri-containerd-88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7.scope - libcontainer container 88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7. May 16 00:52:47.868129 kubelet[4137]: I0516 00:52:47.868108 4137 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-ca-bundle\") on node \"ci-4152.2.3-n-16e7659192\" DevicePath \"\"" May 16 00:52:47.868164 kubelet[4137]: I0516 00:52:47.868130 4137 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-whisker-backend-key-pair\") on node \"ci-4152.2.3-n-16e7659192\" DevicePath \"\"" May 16 00:52:47.868164 kubelet[4137]: I0516 00:52:47.868141 4137 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cdqr\" (UniqueName: \"kubernetes.io/projected/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11-kube-api-access-5cdqr\") on node \"ci-4152.2.3-n-16e7659192\" DevicePath \"\"" May 16 00:52:47.872435 containerd[2665]: time="2025-05-16T00:52:47.872398610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4h2v7,Uid:977000b3-b558-41f5-b536-60dbfe10c1b4,Namespace:kube-system,Attempt:4,} returns sandbox id \"88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7\"" May 16 00:52:47.874233 containerd[2665]: time="2025-05-16T00:52:47.874210820Z" level=info msg="CreateContainer within sandbox \"88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:52:47.879497 containerd[2665]: time="2025-05-16T00:52:47.879472680Z" level=info msg="CreateContainer within sandbox \"88aa1fe7906851ba8d6367a850d459f2116e6b01236d57b2d91a31ce85077dd7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12123926028e71270fb88f57197cbf6293bbdc087173709d1b8516823c5bb432\"" May 16 00:52:47.880561 containerd[2665]: time="2025-05-16T00:52:47.880539578Z" level=info msg="StartContainer for \"12123926028e71270fb88f57197cbf6293bbdc087173709d1b8516823c5bb432\"" May 16 00:52:47.890020 systemd-networkd[2566]: calie07b90c7b2c: Link UP May 16 00:52:47.890177 systemd-networkd[2566]: calie07b90c7b2c: Gained carrier May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.710 [INFO][7041] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.728 [INFO][7041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0 calico-kube-controllers-7c6564c8c8- calico-system 21480104-b865-4459-9a00-43d79bca056f 776 0 2025-05-16 00:52:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c6564c8c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 calico-kube-controllers-7c6564c8c8-94d98 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie07b90c7b2c [] [] }} ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" 
Pod="calico-kube-controllers-7c6564c8c8-94d98" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.729 [INFO][7041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" Pod="calico-kube-controllers-7c6564c8c8-94d98" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7226] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" HandleID="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7226] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" HandleID="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000722790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.3-n-16e7659192", "pod":"calico-kube-controllers-7c6564c8c8-94d98", "timestamp":"2025-05-16 00:52:47.76160316 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.798 [INFO][7226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.798 [INFO][7226] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.872 [INFO][7226] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.875 [INFO][7226] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.878 [INFO][7226] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.879 [INFO][7226] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.880 [INFO][7226] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.881 [INFO][7226] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.881 [INFO][7226] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.884 [INFO][7226] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.887 [INFO][7226] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.66/26] block=192.168.116.64/26 handle="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.887 [INFO][7226] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.66/26] handle="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" host="ci-4152.2.3-n-16e7659192" May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.887 [INFO][7226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:52:47.898696 containerd[2665]: 2025-05-16 00:52:47.887 [INFO][7226] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.66/26] IPv6=[] ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" HandleID="k8s-pod-network.5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" May 16 00:52:47.899141 containerd[2665]: 2025-05-16 00:52:47.888 [INFO][7041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" Pod="calico-kube-controllers-7c6564c8c8-94d98" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0", GenerateName:"calico-kube-controllers-7c6564c8c8-", Namespace:"calico-system", SelfLink:"", UID:"21480104-b865-4459-9a00-43d79bca056f", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6564c8c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"calico-kube-controllers-7c6564c8c8-94d98", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie07b90c7b2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:47.899141 containerd[2665]: 2025-05-16 00:52:47.888 [INFO][7041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.66/32] ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" Pod="calico-kube-controllers-7c6564c8c8-94d98" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" May 16 00:52:47.899141 containerd[2665]: 2025-05-16 00:52:47.888 [INFO][7041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie07b90c7b2c ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" Pod="calico-kube-controllers-7c6564c8c8-94d98" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" May 16 00:52:47.899141 containerd[2665]: 2025-05-16 00:52:47.890 [INFO][7041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" Pod="calico-kube-controllers-7c6564c8c8-94d98" 
WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" May 16 00:52:47.899141 containerd[2665]: 2025-05-16 00:52:47.891 [INFO][7041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" Pod="calico-kube-controllers-7c6564c8c8-94d98" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0", GenerateName:"calico-kube-controllers-7c6564c8c8-", Namespace:"calico-system", SelfLink:"", UID:"21480104-b865-4459-9a00-43d79bca056f", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6564c8c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d", Pod:"calico-kube-controllers-7c6564c8c8-94d98", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie07b90c7b2c", MAC:"aa:4a:51:68:ab:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:47.899141 containerd[2665]: 2025-05-16 00:52:47.897 [INFO][7041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d" Namespace="calico-system" Pod="calico-kube-controllers-7c6564c8c8-94d98" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--kube--controllers--7c6564c8c8--94d98-eth0" May 16 00:52:47.908896 systemd[1]: Started cri-containerd-12123926028e71270fb88f57197cbf6293bbdc087173709d1b8516823c5bb432.scope - libcontainer container 12123926028e71270fb88f57197cbf6293bbdc087173709d1b8516823c5bb432. May 16 00:52:47.911386 containerd[2665]: time="2025-05-16T00:52:47.911322113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:47.911410 containerd[2665]: time="2025-05-16T00:52:47.911383183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:47.911410 containerd[2665]: time="2025-05-16T00:52:47.911395181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:47.911490 containerd[2665]: time="2025-05-16T00:52:47.911476447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:47.920557 systemd[1]: Started cri-containerd-5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d.scope - libcontainer container 5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d. May 16 00:52:47.926169 containerd[2665]: time="2025-05-16T00:52:47.926134620Z" level=info msg="StartContainer for \"12123926028e71270fb88f57197cbf6293bbdc087173709d1b8516823c5bb432\" returns successfully" May 16 00:52:47.943547 containerd[2665]: time="2025-05-16T00:52:47.943517767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6564c8c8-94d98,Uid:21480104-b865-4459-9a00-43d79bca056f,Namespace:calico-system,Attempt:4,} returns sandbox id \"5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d\"" May 16 00:52:47.944588 containerd[2665]: time="2025-05-16T00:52:47.944568787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 00:52:47.992732 systemd-networkd[2566]: cali476600b34df: Link UP May 16 00:52:47.992981 systemd-networkd[2566]: cali476600b34df: Gained carrier May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.717 [INFO][7090] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.728 [INFO][7090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0 coredns-668d6bf9bc- kube-system 73af8152-9d20-4cac-a7ca-149e696fff87 779 0 2025-05-16 00:52:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 coredns-668d6bf9bc-wbpwb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali476600b34df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.729 [INFO][7090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" HandleID="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Workload="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" HandleID="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Workload="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000721f00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.3-n-16e7659192", "pod":"coredns-668d6bf9bc-wbpwb", 
"timestamp":"2025-05-16 00:52:47.761596561 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.887 [INFO][7223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.887 [INFO][7223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.972 [INFO][7223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.977 [INFO][7223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.980 [INFO][7223] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.981 [INFO][7223] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.982 [INFO][7223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.982 [INFO][7223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.983 [INFO][7223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993 May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.986 [INFO][7223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.989 [INFO][7223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.67/26] block=192.168.116.64/26 handle="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.989 [INFO][7223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.67/26] handle="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.989 [INFO][7223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:52:48.001346 containerd[2665]: 2025-05-16 00:52:47.989 [INFO][7223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.67/26] IPv6=[] ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" HandleID="k8s-pod-network.a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Workload="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" May 16 00:52:48.001808 containerd[2665]: 2025-05-16 00:52:47.991 [INFO][7090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73af8152-9d20-4cac-a7ca-149e696fff87", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"coredns-668d6bf9bc-wbpwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali476600b34df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.001808 containerd[2665]: 2025-05-16 00:52:47.991 [INFO][7090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.67/32] ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" May 16 00:52:48.001808 containerd[2665]: 2025-05-16 00:52:47.991 [INFO][7090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali476600b34df ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" May 16 00:52:48.001808 containerd[2665]: 2025-05-16 00:52:47.993 [INFO][7090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" May 16 00:52:48.001808 containerd[2665]: 2025-05-16 00:52:47.993 [INFO][7090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73af8152-9d20-4cac-a7ca-149e696fff87", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993", Pod:"coredns-668d6bf9bc-wbpwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali476600b34df", MAC:"ce:d8:7b:1c:18:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.001808 containerd[2665]: 2025-05-16 00:52:47.999 [INFO][7090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbpwb" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-coredns--668d6bf9bc--wbpwb-eth0" May 16 00:52:48.016001 containerd[2665]: time="2025-05-16T00:52:48.013762932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:48.016001 containerd[2665]: time="2025-05-16T00:52:48.014061884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:48.016001 containerd[2665]: time="2025-05-16T00:52:48.014075122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.016001 containerd[2665]: time="2025-05-16T00:52:48.014158349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.048920 systemd[1]: Started cri-containerd-a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993.scope - libcontainer container a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993. May 16 00:52:48.073864 containerd[2665]: time="2025-05-16T00:52:48.073824822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbpwb,Uid:73af8152-9d20-4cac-a7ca-149e696fff87,Namespace:kube-system,Attempt:4,} returns sandbox id \"a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993\"" May 16 00:52:48.075888 containerd[2665]: time="2025-05-16T00:52:48.075861135Z" level=info msg="CreateContainer within sandbox \"a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:52:48.088737 containerd[2665]: time="2025-05-16T00:52:48.088586655Z" level=info msg="CreateContainer within sandbox \"a98dc4ebdb3b19e12f9e688e618597cd6c6c79afe40a4bc724bbfaa83b8dd993\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cce729588c7dc388a6cebab720dd6f6c2cefc7cf541b5f3ddc21ca668ff1deb\"" May 16 00:52:48.092783 containerd[2665]: time="2025-05-16T00:52:48.089356292Z" level=info msg="StartContainer for \"6cce729588c7dc388a6cebab720dd6f6c2cefc7cf541b5f3ddc21ca668ff1deb\"" May 16 00:52:48.104773 systemd-networkd[2566]: cali5af7741489e: Link UP May 16 00:52:48.104959 systemd-networkd[2566]: cali5af7741489e: Gained carrier May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.711 [INFO][7055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.728 [INFO][7055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0 goldmane-78d55f7ddc- calico-system aed452f7-5298-4a6e-bf6b-b06604585632 782 0 2025-05-16 00:52:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 goldmane-78d55f7ddc-hrh6s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5af7741489e [] [] }} ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.729 [INFO][7055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" HandleID="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Workload="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7216] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" HandleID="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Workload="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483030), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.3-n-16e7659192", "pod":"goldmane-78d55f7ddc-hrh6s", "timestamp":"2025-05-16 00:52:47.761595241 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.989 [INFO][7216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:47.990 [INFO][7216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.072 [INFO][7216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.077 [INFO][7216] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.080 [INFO][7216] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.085 [INFO][7216] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.091 [INFO][7216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.091 [INFO][7216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.094 [INFO][7216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693 May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.098 [INFO][7216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.101 [INFO][7216] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.68/26] block=192.168.116.64/26 handle="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.101 [INFO][7216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.68/26] handle="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.101 [INFO][7216] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:52:48.111921 containerd[2665]: 2025-05-16 00:52:48.101 [INFO][7216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.68/26] IPv6=[] ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" HandleID="k8s-pod-network.ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Workload="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" May 16 00:52:48.112390 containerd[2665]: 2025-05-16 00:52:48.103 [INFO][7055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"aed452f7-5298-4a6e-bf6b-b06604585632", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"goldmane-78d55f7ddc-hrh6s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5af7741489e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.112390 containerd[2665]: 2025-05-16 00:52:48.103 [INFO][7055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.68/32] ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" May 16 00:52:48.112390 containerd[2665]: 2025-05-16 00:52:48.103 [INFO][7055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5af7741489e ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" May 16 00:52:48.112390 containerd[2665]: 2025-05-16 00:52:48.105 [INFO][7055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" May 16 00:52:48.112390 containerd[2665]: 2025-05-16 00:52:48.105 [INFO][7055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"aed452f7-5298-4a6e-bf6b-b06604585632", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693", Pod:"goldmane-78d55f7ddc-hrh6s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5af7741489e", MAC:"b2:ec:a6:c4:15:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.112390 containerd[2665]: 2025-05-16 00:52:48.110 [INFO][7055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693" Namespace="calico-system" Pod="goldmane-78d55f7ddc-hrh6s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-goldmane--78d55f7ddc--hrh6s-eth0" May 16 00:52:48.122901 systemd[1]: Started cri-containerd-6cce729588c7dc388a6cebab720dd6f6c2cefc7cf541b5f3ddc21ca668ff1deb.scope - libcontainer container 6cce729588c7dc388a6cebab720dd6f6c2cefc7cf541b5f3ddc21ca668ff1deb. May 16 00:52:48.124278 containerd[2665]: time="2025-05-16T00:52:48.124208463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:48.124278 containerd[2665]: time="2025-05-16T00:52:48.124269374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:48.124340 containerd[2665]: time="2025-05-16T00:52:48.124280172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.124375 containerd[2665]: time="2025-05-16T00:52:48.124357240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.132751 kernel: bpftool[7757]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 16 00:52:48.134195 systemd[1]: Started cri-containerd-ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693.scope - libcontainer container ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693. 
May 16 00:52:48.141272 containerd[2665]: time="2025-05-16T00:52:48.141240492Z" level=info msg="StartContainer for \"6cce729588c7dc388a6cebab720dd6f6c2cefc7cf541b5f3ddc21ca668ff1deb\" returns successfully" May 16 00:52:48.157557 containerd[2665]: time="2025-05-16T00:52:48.157527641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-hrh6s,Uid:aed452f7-5298-4a6e-bf6b-b06604585632,Namespace:calico-system,Attempt:4,} returns sandbox id \"ec31dd95b3e9656f09d514e4309388b6aefe219e0b072f940fbf3cd42b388693\"" May 16 00:52:48.206311 systemd-networkd[2566]: cali28a3ae6d5fa: Link UP May 16 00:52:48.206723 systemd-networkd[2566]: cali28a3ae6d5fa: Gained carrier May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:47.717 [INFO][7093] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:47.728 [INFO][7093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0 csi-node-driver- calico-system f82c5669-93be-4155-9022-550402fea58a 663 0 2025-05-16 00:52:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 csi-node-driver-65t8z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28a3ae6d5fa [] [] }} ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:47.729 [INFO][7093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" HandleID="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Workload="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" HandleID="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Workload="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000454770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.3-n-16e7659192", "pod":"csi-node-driver-65t8z", "timestamp":"2025-05-16 00:52:47.761595801 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7221] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.101 [INFO][7221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.102 [INFO][7221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.173 [INFO][7221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.177 [INFO][7221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.180 [INFO][7221] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.181 [INFO][7221] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.183 [INFO][7221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.183 [INFO][7221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.184 [INFO][7221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63 May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.186 [INFO][7221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.203 [INFO][7221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.69/26] block=192.168.116.64/26 handle="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.203 [INFO][7221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.69/26] handle="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.203 [INFO][7221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:52:48.217914 containerd[2665]: 2025-05-16 00:52:48.203 [INFO][7221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.69/26] IPv6=[] ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" HandleID="k8s-pod-network.20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Workload="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" May 16 00:52:48.218415 containerd[2665]: 2025-05-16 00:52:48.205 [INFO][7093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f82c5669-93be-4155-9022-550402fea58a", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"csi-node-driver-65t8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a3ae6d5fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.218415 containerd[2665]: 2025-05-16 00:52:48.205 [INFO][7093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.69/32] ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" May 16 00:52:48.218415 containerd[2665]: 2025-05-16 00:52:48.205 [INFO][7093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28a3ae6d5fa ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" May 16 00:52:48.218415 containerd[2665]: 2025-05-16 00:52:48.206 [INFO][7093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" May 16 00:52:48.218415 containerd[2665]: 2025-05-16 00:52:48.206 [INFO][7093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f82c5669-93be-4155-9022-550402fea58a", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63", Pod:"csi-node-driver-65t8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a3ae6d5fa", MAC:"fe:af:8f:35:32:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.218415 containerd[2665]: 2025-05-16 00:52:48.214 [INFO][7093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63" Namespace="calico-system" Pod="csi-node-driver-65t8z" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-csi--node--driver--65t8z-eth0" May 16 00:52:48.230815 containerd[2665]: time="2025-05-16T00:52:48.229491103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:48.230815 containerd[2665]: time="2025-05-16T00:52:48.229543974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:48.230815 containerd[2665]: time="2025-05-16T00:52:48.229565691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.230815 containerd[2665]: time="2025-05-16T00:52:48.229944630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.254887 systemd[1]: Started cri-containerd-20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63.scope - libcontainer container 20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63. 
May 16 00:52:48.271801 containerd[2665]: time="2025-05-16T00:52:48.271773283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65t8z,Uid:f82c5669-93be-4155-9022-550402fea58a,Namespace:calico-system,Attempt:3,} returns sandbox id \"20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63\"" May 16 00:52:48.295113 systemd-networkd[2566]: vxlan.calico: Link UP May 16 00:52:48.295118 systemd-networkd[2566]: vxlan.calico: Gained carrier May 16 00:52:48.299415 systemd-networkd[2566]: cali50e9269f860: Link UP May 16 00:52:48.301375 systemd-networkd[2566]: cali50e9269f860: Gained carrier May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:47.712 [INFO][7050] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:47.728 [INFO][7050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0 calico-apiserver-5cf5649dd4- calico-apiserver a4ae8cdc-592f-4b69-bcaa-748076a5f292 778 0 2025-05-16 00:52:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cf5649dd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 calico-apiserver-5cf5649dd4-7s65s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali50e9269f860 [] [] }} ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:47.729 [INFO][7050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" HandleID="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" HandleID="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b6890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.3-n-16e7659192", "pod":"calico-apiserver-5cf5649dd4-7s65s", "timestamp":"2025-05-16 00:52:47.761597841 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:47.762 
[INFO][7218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.203 [INFO][7218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.203 [INFO][7218] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.273 [INFO][7218] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.277 [INFO][7218] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.281 [INFO][7218] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.282 [INFO][7218] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.284 [INFO][7218] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.284 [INFO][7218] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.285 [INFO][7218] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4 May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.288 [INFO][7218] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.292 [INFO][7218] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.70/26] block=192.168.116.64/26 handle="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.292 [INFO][7218] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.70/26] handle="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.292 [INFO][7218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
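A few entries earlier, systemd-networkd reports the vxlan.calico device gaining carrier; that is the VXLAN overlay interface Calico's dataplane uses to carry pod traffic between nodes. The sketch below creates an equivalently shaped device, assuming the vishvananda/netlink package, an eth0 parent device, and Calico's usual defaults of VNI 4096 and UDP port 4789 (all assumptions here, not taken from this log); it must run as root on Linux.

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Parent device the VXLAN tunnel hangs off; adjust for the host.
	parent, err := netlink.LinkByName("eth0")
	if err != nil {
		log.Fatalf("lookup parent link: %v", err)
	}

	vxlan := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{
			Name: "vxlan.calico", // same device name the log shows coming up
			MTU:  1450,           // leave headroom for the VXLAN encapsulation
		},
		VxlanId:      4096, // assumed default VNI
		Port:         4789, // standard VXLAN UDP port
		VtepDevIndex: parent.Attrs().Index,
	}

	if err := netlink.LinkAdd(vxlan); err != nil {
		log.Fatalf("create vxlan.calico: %v", err)
	}
	if err := netlink.LinkSetUp(vxlan); err != nil {
		log.Fatalf("bring vxlan.calico up: %v", err)
	}
	log.Println("vxlan.calico created and up")
}
```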
May 16 00:52:48.307900 containerd[2665]: 2025-05-16 00:52:48.292 [INFO][7218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.70/26] IPv6=[] ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" HandleID="k8s-pod-network.1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" May 16 00:52:48.308372 containerd[2665]: 2025-05-16 00:52:48.294 [INFO][7050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0", GenerateName:"calico-apiserver-5cf5649dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4ae8cdc-592f-4b69-bcaa-748076a5f292", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf5649dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"calico-apiserver-5cf5649dd4-7s65s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50e9269f860", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.308372 containerd[2665]: 2025-05-16 00:52:48.295 [INFO][7050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.70/32] ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" May 16 00:52:48.308372 containerd[2665]: 2025-05-16 00:52:48.295 [INFO][7050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50e9269f860 ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" May 16 00:52:48.308372 containerd[2665]: 2025-05-16 00:52:48.300 [INFO][7050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" May 16 00:52:48.308372 containerd[2665]: 2025-05-16 00:52:48.301 
[INFO][7050] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0", GenerateName:"calico-apiserver-5cf5649dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4ae8cdc-592f-4b69-bcaa-748076a5f292", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf5649dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4", Pod:"calico-apiserver-5cf5649dd4-7s65s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50e9269f860", MAC:"82:41:7f:22:0e:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.308372 containerd[2665]: 2025-05-16 00:52:48.306 [INFO][7050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-7s65s" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--7s65s-eth0" May 16 00:52:48.330462 containerd[2665]: time="2025-05-16T00:52:48.330379966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:48.330462 containerd[2665]: time="2025-05-16T00:52:48.330448635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:48.330509 containerd[2665]: time="2025-05-16T00:52:48.330460513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.330559 containerd[2665]: time="2025-05-16T00:52:48.330542700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.360901 systemd[1]: Started cri-containerd-1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4.scope - libcontainer container 1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4. 
May 16 00:52:48.384270 containerd[2665]: time="2025-05-16T00:52:48.384236611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-7s65s,Uid:a4ae8cdc-592f-4b69-bcaa-748076a5f292,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4\"" May 16 00:52:48.396903 systemd-networkd[2566]: cali7b4f2982437: Link UP May 16 00:52:48.397286 systemd-networkd[2566]: cali7b4f2982437: Gained carrier May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:47.712 [INFO][7066] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:47.728 [INFO][7066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0 calico-apiserver-5cf5649dd4- calico-apiserver 6a45b85a-e5a7-4f77-b3f6-35027c560e3c 780 0 2025-05-16 00:52:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cf5649dd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 calico-apiserver-5cf5649dd4-49jjr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b4f2982437 [] [] }} ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:47.729 [INFO][7066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:47.761 [INFO][7224] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" HandleID="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7224] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" HandleID="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dcf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.3-n-16e7659192", "pod":"calico-apiserver-5cf5649dd4-49jjr", "timestamp":"2025-05-16 00:52:47.761594201 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:47.762 [INFO][7224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.292 [INFO][7224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.292 [INFO][7224] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.373 [INFO][7224] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.378 [INFO][7224] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.382 [INFO][7224] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.383 [INFO][7224] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.385 [INFO][7224] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.385 [INFO][7224] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.386 [INFO][7224] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9 May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.388 [INFO][7224] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.393 [INFO][7224] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.71/26] block=192.168.116.64/26 handle="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.393 [INFO][7224] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.71/26] handle="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" host="ci-4152.2.3-n-16e7659192" May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.393 [INFO][7224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
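By this point the same /26 affinity block has handed out consecutive addresses to five endpoints on this node: .67 (coredns), .68 (goldmane), .69 (csi-node-driver), and .70/.71 (the two calico-apiserver pods). The short Go check below, using only net/netip, shows how many addresses such a block holds and confirms that each assigned address falls inside 192.168.116.64/26.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.116.64/26")

	// A /26 spans 2^(32-26) = 64 addresses; IPAM blocks like this are not
	// real subnets, so every address in the block is usable by a workload.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// The addresses assigned in the log, in the order they were claimed.
	assigned := []string{
		"192.168.116.67", // coredns-668d6bf9bc-wbpwb
		"192.168.116.68", // goldmane-78d55f7ddc-hrh6s
		"192.168.116.69", // csi-node-driver-65t8z
		"192.168.116.70", // calico-apiserver-5cf5649dd4-7s65s
		"192.168.116.71", // calico-apiserver-5cf5649dd4-49jjr
	}
	for _, a := range assigned {
		addr := netip.MustParseAddr(a)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
```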
May 16 00:52:48.404857 containerd[2665]: 2025-05-16 00:52:48.393 [INFO][7224] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.71/26] IPv6=[] ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" HandleID="k8s-pod-network.bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Workload="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" May 16 00:52:48.405358 containerd[2665]: 2025-05-16 00:52:48.395 [INFO][7066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0", GenerateName:"calico-apiserver-5cf5649dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a45b85a-e5a7-4f77-b3f6-35027c560e3c", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf5649dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"calico-apiserver-5cf5649dd4-49jjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b4f2982437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.405358 containerd[2665]: 2025-05-16 00:52:48.395 [INFO][7066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.71/32] ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" May 16 00:52:48.405358 containerd[2665]: 2025-05-16 00:52:48.395 [INFO][7066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b4f2982437 ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" May 16 00:52:48.405358 containerd[2665]: 2025-05-16 00:52:48.397 [INFO][7066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" May 16 00:52:48.405358 containerd[2665]: 2025-05-16 00:52:48.397 
[INFO][7066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0", GenerateName:"calico-apiserver-5cf5649dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a45b85a-e5a7-4f77-b3f6-35027c560e3c", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf5649dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9", Pod:"calico-apiserver-5cf5649dd4-49jjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b4f2982437", MAC:"a6:58:9c:68:93:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:48.405358 containerd[2665]: 2025-05-16 00:52:48.403 [INFO][7066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf5649dd4-49jjr" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-calico--apiserver--5cf5649dd4--49jjr-eth0" May 16 00:52:48.417969 containerd[2665]: time="2025-05-16T00:52:48.417908812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:48.418011 containerd[2665]: time="2025-05-16T00:52:48.417967202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:48.418011 containerd[2665]: time="2025-05-16T00:52:48.417978921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.418090 containerd[2665]: time="2025-05-16T00:52:48.418053029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:48.440888 systemd[1]: Started cri-containerd-bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9.scope - libcontainer container bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9. 
May 16 00:52:48.464537 containerd[2665]: time="2025-05-16T00:52:48.464498462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf5649dd4-49jjr,Uid:6a45b85a-e5a7-4f77-b3f6-35027c560e3c,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9\"" May 16 00:52:48.597459 systemd[1]: run-netns-cni\x2db9e2d9d0\x2ddcc9\x2dc001\x2d4a88\x2dadbe7fe9ec16.mount: Deactivated successfully. May 16 00:52:48.597540 systemd[1]: run-netns-cni\x2d8301e9b3\x2d18b3\x2d3c30\x2dbbd8\x2d0e45883379cd.mount: Deactivated successfully. May 16 00:52:48.597585 systemd[1]: run-netns-cni\x2d307da801\x2daa3e\x2db023\x2d2ee1\x2df83a289497c7.mount: Deactivated successfully. May 16 00:52:48.597632 systemd[1]: var-lib-kubelet-pods-d8a55ae1\x2d9984\x2d429b\x2d9b49\x2d8a7e9ec6fb11-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5cdqr.mount: Deactivated successfully. May 16 00:52:48.597682 systemd[1]: var-lib-kubelet-pods-d8a55ae1\x2d9984\x2d429b\x2d9b49\x2d8a7e9ec6fb11-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 16 00:52:48.641077 containerd[2665]: time="2025-05-16T00:52:48.641027077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 16 00:52:48.641077 containerd[2665]: time="2025-05-16T00:52:48.641034356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:48.641858 containerd[2665]: time="2025-05-16T00:52:48.641828149Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:48.643646 containerd[2665]: time="2025-05-16T00:52:48.643619902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:48.644372 containerd[2665]: time="2025-05-16T00:52:48.644345985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 699.749322ms" May 16 00:52:48.644396 containerd[2665]: time="2025-05-16T00:52:48.644376260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 16 00:52:48.647383 containerd[2665]: time="2025-05-16T00:52:48.647358822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:52:48.651834 containerd[2665]: time="2025-05-16T00:52:48.651808069Z" level=info msg="CreateContainer within sandbox \"5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 00:52:48.656635 containerd[2665]: time="2025-05-16T00:52:48.656606179Z" level=info msg="CreateContainer within sandbox \"5183cf2d445ecc05c3d55b169928fc689670203de08bdb58c8752b837e719e9d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns 
container id \"8169d036b71a76aa468395d6fbc641a9c2f4726777197ea12dad10761186f2b6\"" May 16 00:52:48.656988 containerd[2665]: time="2025-05-16T00:52:48.656958923Z" level=info msg="StartContainer for \"8169d036b71a76aa468395d6fbc641a9c2f4726777197ea12dad10761186f2b6\"" May 16 00:52:48.669136 containerd[2665]: time="2025-05-16T00:52:48.669104216Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:52:48.669378 containerd[2665]: time="2025-05-16T00:52:48.669354215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:52:48.669430 containerd[2665]: time="2025-05-16T00:52:48.669404127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:52:48.669572 kubelet[4137]: E0516 00:52:48.669530 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:52:48.669646 kubelet[4137]: E0516 00:52:48.669586 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:52:48.669837 containerd[2665]: time="2025-05-16T00:52:48.669814662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 00:52:48.670470 kubelet[4137]: E0516 00:52:48.669883 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ns2w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:52:48.671608 kubelet[4137]: E0516 00:52:48.671579 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:52:48.687861 systemd[1]: Started cri-containerd-8169d036b71a76aa468395d6fbc641a9c2f4726777197ea12dad10761186f2b6.scope - libcontainer container 8169d036b71a76aa468395d6fbc641a9c2f4726777197ea12dad10761186f2b6. May 16 00:52:48.698780 kubelet[4137]: E0516 00:52:48.698751 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:52:48.709463 systemd[1]: Removed slice kubepods-besteffort-podd8a55ae1_9984_429b_9b49_8a7e9ec6fb11.slice - libcontainer container kubepods-besteffort-podd8a55ae1_9984_429b_9b49_8a7e9ec6fb11.slice. May 16 00:52:48.713623 kubelet[4137]: I0516 00:52:48.713555 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4h2v7" podStartSLOduration=19.713538851 podStartE2EDuration="19.713538851s" podCreationTimestamp="2025-05-16 00:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:52:48.713404432 +0000 UTC m=+27.177151579" watchObservedRunningTime="2025-05-16 00:52:48.713538851 +0000 UTC m=+27.177285918" May 16 00:52:48.720601 containerd[2665]: time="2025-05-16T00:52:48.720569684Z" level=info msg="StartContainer for \"8169d036b71a76aa468395d6fbc641a9c2f4726777197ea12dad10761186f2b6\" returns successfully" May 16 00:52:48.722732 kubelet[4137]: I0516 00:52:48.722689 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wbpwb" podStartSLOduration=19.722674386 podStartE2EDuration="19.722674386s" podCreationTimestamp="2025-05-16 00:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:52:48.722591759 +0000 UTC m=+27.186338826" watchObservedRunningTime="2025-05-16 00:52:48.722674386 +0000 UTC m=+27.186421493" May 16 00:52:48.763995 systemd[1]: Created slice kubepods-besteffort-pod4134b6f6_00a5_4952_b6e3_dd3becf66cc3.slice - libcontainer container kubepods-besteffort-pod4134b6f6_00a5_4952_b6e3_dd3becf66cc3.slice. 
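Note: the goldmane pull fails before any image data is transferred: the anonymous token request to ghcr.io returns 403 Forbidden, so containerd cannot resolve the reference and kubelet moves the container into ImagePullBackOff. A hedged reproduction of just that token request, with the URL copied verbatim from the error above (a bare http.Get with no credentials is an assumption made for illustration):

// Re-issues the anonymous token request that containerd reports as failing.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	const tokenURL = "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"

	resp, err := http.Get(tokenURL)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()

	// containerd saw "403 Forbidden" here; a 200 with a JSON token body would
	// mean the registry side is healthy again.
	fmt.Println("status:", resp.Status)
}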
May 16 00:52:48.874368 kubelet[4137]: I0516 00:52:48.874334 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4134b6f6-00a5-4952-b6e3-dd3becf66cc3-whisker-backend-key-pair\") pod \"whisker-849858999-52vcl\" (UID: \"4134b6f6-00a5-4952-b6e3-dd3becf66cc3\") " pod="calico-system/whisker-849858999-52vcl" May 16 00:52:48.874534 kubelet[4137]: I0516 00:52:48.874375 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4134b6f6-00a5-4952-b6e3-dd3becf66cc3-whisker-ca-bundle\") pod \"whisker-849858999-52vcl\" (UID: \"4134b6f6-00a5-4952-b6e3-dd3becf66cc3\") " pod="calico-system/whisker-849858999-52vcl" May 16 00:52:48.874534 kubelet[4137]: I0516 00:52:48.874399 4137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6q6v\" (UniqueName: \"kubernetes.io/projected/4134b6f6-00a5-4952-b6e3-dd3becf66cc3-kube-api-access-m6q6v\") pod \"whisker-849858999-52vcl\" (UID: \"4134b6f6-00a5-4952-b6e3-dd3becf66cc3\") " pod="calico-system/whisker-849858999-52vcl" May 16 00:52:48.946218 containerd[2665]: time="2025-05-16T00:52:48.946143035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:48.946272 containerd[2665]: time="2025-05-16T00:52:48.946203866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 16 00:52:48.946926 containerd[2665]: time="2025-05-16T00:52:48.946904153Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:48.948769 containerd[2665]: time="2025-05-16T00:52:48.948738779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:48.949453 containerd[2665]: time="2025-05-16T00:52:48.949425829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 279.572333ms" May 16 00:52:48.949476 containerd[2665]: time="2025-05-16T00:52:48.949457864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 16 00:52:48.950229 containerd[2665]: time="2025-05-16T00:52:48.950208904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:52:48.951156 containerd[2665]: time="2025-05-16T00:52:48.951132076Z" level=info msg="CreateContainer within sandbox \"20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 00:52:48.958936 containerd[2665]: time="2025-05-16T00:52:48.958867275Z" level=info msg="CreateContainer within sandbox \"20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"84e453533c7b205c0477d80d2d3aadbc2597ce7d5f0ea9cf1bdc4f00e240c5a2\"" May 16 00:52:48.959687 containerd[2665]: time="2025-05-16T00:52:48.959308924Z" level=info msg="StartContainer for \"84e453533c7b205c0477d80d2d3aadbc2597ce7d5f0ea9cf1bdc4f00e240c5a2\"" May 16 00:52:48.996926 systemd[1]: Started cri-containerd-84e453533c7b205c0477d80d2d3aadbc2597ce7d5f0ea9cf1bdc4f00e240c5a2.scope - libcontainer container 84e453533c7b205c0477d80d2d3aadbc2597ce7d5f0ea9cf1bdc4f00e240c5a2. May 16 00:52:49.018029 containerd[2665]: time="2025-05-16T00:52:49.017994608Z" level=info msg="StartContainer for \"84e453533c7b205c0477d80d2d3aadbc2597ce7d5f0ea9cf1bdc4f00e240c5a2\" returns successfully" May 16 00:52:49.066697 containerd[2665]: time="2025-05-16T00:52:49.066651534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849858999-52vcl,Uid:4134b6f6-00a5-4952-b6e3-dd3becf66cc3,Namespace:calico-system,Attempt:0,}" May 16 00:52:49.163974 systemd-networkd[2566]: calib0f2ded3b41: Link UP May 16 00:52:49.164234 systemd-networkd[2566]: calib0f2ded3b41: Gained carrier May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.097 [INFO][8334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0 whisker-849858999- calico-system 4134b6f6-00a5-4952-b6e3-dd3becf66cc3 939 0 2025-05-16 00:52:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:849858999 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4152.2.3-n-16e7659192 whisker-849858999-52vcl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0f2ded3b41 [] [] }} ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.097 [INFO][8334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.119 [INFO][8361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" HandleID="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Workload="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.119 [INFO][8361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" HandleID="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Workload="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000366680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.3-n-16e7659192", "pod":"whisker-849858999-52vcl", "timestamp":"2025-05-16 00:52:49.11956438 +0000 UTC"}, Hostname:"ci-4152.2.3-n-16e7659192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.119 [INFO][8361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.119 [INFO][8361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.119 [INFO][8361] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.3-n-16e7659192' May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.127 [INFO][8361] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.130 [INFO][8361] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.134 [INFO][8361] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.135 [INFO][8361] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.137 [INFO][8361] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.137 [INFO][8361] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.138 [INFO][8361] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2 May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.155 [INFO][8361] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.159 [INFO][8361] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.72/26] block=192.168.116.64/26 handle="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.160 [INFO][8361] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.72/26] handle="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" host="ci-4152.2.3-n-16e7659192" May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.160 [INFO][8361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
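Note: the [8361] IPAM entries above trace the allocation order: take the host-wide lock, look up the host's block affinity, load 192.168.116.64/26, claim one address, write the block back, then release the lock. The toy Go model below mirrors only that ordering; it is not Calico's ipam package, and the mutex and free-address pointer are stand-ins:

// Toy model of the allocation order only (lock, affine block, claim, write,
// unlock); this is not Calico's ipam package.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu     sync.Mutex   // stands in for the host-wide IPAM lock
	prefix netip.Prefix // the host's affine block
	next   netip.Addr   // next unclaimed address
}

func (b *block) claim(handle string) (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock() // released only after the "write block" step below

	if !b.prefix.Contains(b.next) {
		return netip.Addr{}, false // block exhausted
	}
	got := b.next
	b.next = b.next.Next() // persisting this is the "Writing block in order to claim IPs" step
	fmt.Printf("claimed %s for handle %s\n", got, handle)
	return got, true
}

func main() {
	b := &block{
		prefix: netip.MustParsePrefix("192.168.116.64/26"),
		next:   netip.MustParseAddr("192.168.116.72"),
	}
	b.claim("k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2")
}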
May 16 00:52:49.171919 containerd[2665]: 2025-05-16 00:52:49.160 [INFO][8361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.72/26] IPv6=[] ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" HandleID="k8s-pod-network.9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Workload="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" May 16 00:52:49.172342 containerd[2665]: 2025-05-16 00:52:49.161 [INFO][8334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0", GenerateName:"whisker-849858999-", Namespace:"calico-system", SelfLink:"", UID:"4134b6f6-00a5-4952-b6e3-dd3becf66cc3", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"849858999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"", Pod:"whisker-849858999-52vcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0f2ded3b41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:49.172342 containerd[2665]: 2025-05-16 00:52:49.161 [INFO][8334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.72/32] ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" May 16 00:52:49.172342 containerd[2665]: 2025-05-16 00:52:49.161 [INFO][8334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0f2ded3b41 ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" May 16 00:52:49.172342 containerd[2665]: 2025-05-16 00:52:49.164 [INFO][8334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" May 16 00:52:49.172342 containerd[2665]: 2025-05-16 00:52:49.165 [INFO][8334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" 
Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0", GenerateName:"whisker-849858999-", Namespace:"calico-system", SelfLink:"", UID:"4134b6f6-00a5-4952-b6e3-dd3becf66cc3", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 52, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"849858999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.3-n-16e7659192", ContainerID:"9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2", Pod:"whisker-849858999-52vcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0f2ded3b41", MAC:"02:7b:08:e7:73:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:52:49.172342 containerd[2665]: 2025-05-16 00:52:49.170 [INFO][8334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2" Namespace="calico-system" Pod="whisker-849858999-52vcl" WorkloadEndpoint="ci--4152.2.3--n--16e7659192-k8s-whisker--849858999--52vcl-eth0" May 16 00:52:49.175645 systemd-networkd[2566]: cali476600b34df: Gained IPv6LL May 16 00:52:49.184674 containerd[2665]: time="2025-05-16T00:52:49.184601124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:52:49.184674 containerd[2665]: time="2025-05-16T00:52:49.184660995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:52:49.184674 containerd[2665]: time="2025-05-16T00:52:49.184672034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:49.184793 containerd[2665]: time="2025-05-16T00:52:49.184753261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:52:49.211932 systemd[1]: Started cri-containerd-9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2.scope - libcontainer container 9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2. 
May 16 00:52:49.237962 containerd[2665]: time="2025-05-16T00:52:49.237932988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849858999-52vcl,Uid:4134b6f6-00a5-4952-b6e3-dd3becf66cc3,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a83cefff9083191d47943210d4e33c19375a284b5a8ca2d2a38d839e1d103b2\"" May 16 00:52:49.300852 systemd-networkd[2566]: calid716b4c2acc: Gained IPv6LL May 16 00:52:49.606048 containerd[2665]: time="2025-05-16T00:52:49.605952068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:49.606048 containerd[2665]: time="2025-05-16T00:52:49.606000541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 16 00:52:49.606819 containerd[2665]: time="2025-05-16T00:52:49.606798301Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:49.608637 containerd[2665]: time="2025-05-16T00:52:49.608614668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:49.609133 kubelet[4137]: I0516 00:52:49.609102 4137 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a55ae1-9984-429b-9b49-8a7e9ec6fb11" path="/var/lib/kubelet/pods/d8a55ae1-9984-429b-9b49-8a7e9ec6fb11/volumes" May 16 00:52:49.609477 containerd[2665]: time="2025-05-16T00:52:49.609452862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 659.216842ms" May 16 00:52:49.609504 containerd[2665]: time="2025-05-16T00:52:49.609479778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 16 00:52:49.610231 containerd[2665]: time="2025-05-16T00:52:49.610210188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:52:49.611120 containerd[2665]: time="2025-05-16T00:52:49.611099015Z" level=info msg="CreateContainer within sandbox \"1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:52:49.627466 containerd[2665]: time="2025-05-16T00:52:49.627438479Z" level=info msg="CreateContainer within sandbox \"1acc494dee61f9cf74f5f09bfaeada362fa7a037797980d6d849cebe82e510c4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f5052e36fa6ea1bd137178b24752e11239172b3aae9cf05aef3330dd30964969\"" May 16 00:52:49.627805 containerd[2665]: time="2025-05-16T00:52:49.627779227Z" level=info msg="StartContainer for \"f5052e36fa6ea1bd137178b24752e11239172b3aae9cf05aef3330dd30964969\"" May 16 00:52:49.664861 systemd[1]: Started cri-containerd-f5052e36fa6ea1bd137178b24752e11239172b3aae9cf05aef3330dd30964969.scope - libcontainer container f5052e36fa6ea1bd137178b24752e11239172b3aae9cf05aef3330dd30964969. 
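Note: the pull messages above include enough to gauge transfer speed: the first calico/apiserver pull reports 45822470 bytes in 659.216842ms, while the immediately repeated pull for the second replica completes in 75.770171ms because the layers are already present locally. A small Go calculation over those logged figures (nothing beyond the numbers shown is assumed):

// Reading aid only: throughput implied by the figures in the pull messages
// above (45822470 bytes in 659.216842ms).
package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 45822470
	dur, _ := time.ParseDuration("659.216842ms")

	mib := float64(sizeBytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %v = about %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
}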
May 16 00:52:49.682746 containerd[2665]: time="2025-05-16T00:52:49.682711570Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:49.682858 containerd[2665]: time="2025-05-16T00:52:49.682735487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 16 00:52:49.685825 systemd-networkd[2566]: cali5af7741489e: Gained IPv6LL May 16 00:52:49.686203 containerd[2665]: time="2025-05-16T00:52:49.686021633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 75.770171ms" May 16 00:52:49.686203 containerd[2665]: time="2025-05-16T00:52:49.686064666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 16 00:52:49.686055 systemd-networkd[2566]: cali50e9269f860: Gained IPv6LL May 16 00:52:49.687994 containerd[2665]: time="2025-05-16T00:52:49.687964501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 00:52:49.688578 containerd[2665]: time="2025-05-16T00:52:49.688547773Z" level=info msg="CreateContainer within sandbox \"bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:52:49.689987 containerd[2665]: time="2025-05-16T00:52:49.689961880Z" level=info msg="StartContainer for \"f5052e36fa6ea1bd137178b24752e11239172b3aae9cf05aef3330dd30964969\" returns successfully" May 16 00:52:49.693524 containerd[2665]: time="2025-05-16T00:52:49.693495829Z" level=info msg="CreateContainer within sandbox \"bdba9b491505fc41b97bfb44c61795ddd4637601e7d6d6987c0db9ff1df677c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9b5c3f742b055b28edb5e80608a7b339f75929014844e7a33cbe26e51a4855c0\"" May 16 00:52:49.693856 containerd[2665]: time="2025-05-16T00:52:49.693832779Z" level=info msg="StartContainer for \"9b5c3f742b055b28edb5e80608a7b339f75929014844e7a33cbe26e51a4855c0\"" May 16 00:52:49.715722 kubelet[4137]: E0516 00:52:49.715688 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:52:49.725903 kubelet[4137]: I0516 00:52:49.717030 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cf5649dd4-7s65s" podStartSLOduration=11.492070473 podStartE2EDuration="12.717015574s" podCreationTimestamp="2025-05-16 00:52:37 +0000 UTC" firstStartedPulling="2025-05-16 00:52:48.385173781 +0000 UTC m=+26.848920848" 
lastFinishedPulling="2025-05-16 00:52:49.610118882 +0000 UTC m=+28.073865949" observedRunningTime="2025-05-16 00:52:49.716724658 +0000 UTC m=+28.180471764" watchObservedRunningTime="2025-05-16 00:52:49.717015574 +0000 UTC m=+28.180762681" May 16 00:52:49.725903 kubelet[4137]: I0516 00:52:49.724361 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c6564c8c8-94d98" podStartSLOduration=8.021504807 podStartE2EDuration="8.724346512s" podCreationTimestamp="2025-05-16 00:52:41 +0000 UTC" firstStartedPulling="2025-05-16 00:52:47.944357783 +0000 UTC m=+26.408104890" lastFinishedPulling="2025-05-16 00:52:48.647199488 +0000 UTC m=+27.110946595" observedRunningTime="2025-05-16 00:52:49.723885501 +0000 UTC m=+28.187632608" watchObservedRunningTime="2025-05-16 00:52:49.724346512 +0000 UTC m=+28.188093619" May 16 00:52:49.725885 systemd[1]: Started cri-containerd-9b5c3f742b055b28edb5e80608a7b339f75929014844e7a33cbe26e51a4855c0.scope - libcontainer container 9b5c3f742b055b28edb5e80608a7b339f75929014844e7a33cbe26e51a4855c0. May 16 00:52:49.748824 systemd-networkd[2566]: cali28a3ae6d5fa: Gained IPv6LL May 16 00:52:49.753628 containerd[2665]: time="2025-05-16T00:52:49.753564520Z" level=info msg="StartContainer for \"9b5c3f742b055b28edb5e80608a7b339f75929014844e7a33cbe26e51a4855c0\" returns successfully" May 16 00:52:49.876822 systemd-networkd[2566]: vxlan.calico: Gained IPv6LL May 16 00:52:49.940870 systemd-networkd[2566]: calie07b90c7b2c: Gained IPv6LL May 16 00:52:50.033983 containerd[2665]: time="2025-05-16T00:52:50.033951765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:50.034067 containerd[2665]: time="2025-05-16T00:52:50.034028914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 16 00:52:50.034736 containerd[2665]: time="2025-05-16T00:52:50.034717697Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:50.036516 containerd[2665]: time="2025-05-16T00:52:50.036496727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:52:50.037208 containerd[2665]: time="2025-05-16T00:52:50.037187549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 349.187494ms" May 16 00:52:50.037236 containerd[2665]: time="2025-05-16T00:52:50.037214865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 16 00:52:50.037955 containerd[2665]: time="2025-05-16T00:52:50.037939363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:52:50.039081 containerd[2665]: time="2025-05-16T00:52:50.039058286Z" level=info msg="CreateContainer within sandbox 
\"20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 00:52:50.044619 containerd[2665]: time="2025-05-16T00:52:50.044593186Z" level=info msg="CreateContainer within sandbox \"20f9ad95dc39b87f72390cdbc623a53954f5a06e7e109b339540db325c393f63\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e8c71cddc8f893c6251eb4b06528e84e70510d04cd350c85cd3d851eeec102d\"" May 16 00:52:50.044955 containerd[2665]: time="2025-05-16T00:52:50.044933498Z" level=info msg="StartContainer for \"8e8c71cddc8f893c6251eb4b06528e84e70510d04cd350c85cd3d851eeec102d\"" May 16 00:52:50.061969 containerd[2665]: time="2025-05-16T00:52:50.061931542Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:52:50.062215 containerd[2665]: time="2025-05-16T00:52:50.062194905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:52:50.062281 containerd[2665]: time="2025-05-16T00:52:50.062258376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:52:50.062382 kubelet[4137]: E0516 00:52:50.062349 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:52:50.062458 kubelet[4137]: E0516 00:52:50.062392 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:52:50.062517 kubelet[4137]: E0516 00:52:50.062487 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8533133c9db34d9e9fc5ed2e01534719,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:52:50.064060 containerd[2665]: time="2025-05-16T00:52:50.064046164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:52:50.072858 systemd[1]: Started cri-containerd-8e8c71cddc8f893c6251eb4b06528e84e70510d04cd350c85cd3d851eeec102d.scope - libcontainer container 8e8c71cddc8f893c6251eb4b06528e84e70510d04cd350c85cd3d851eeec102d. 
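Note: after each failed whisker pull, kubelet's later retries are spaced out by the back-off visible in the "Back-off pulling image" messages that follow. The sketch below shows only the doubling-with-cap shape of that behaviour; the 10-second base and 5-minute cap are assumed kubelet defaults, not values read from this log:

// Shape of the retry spacing behind "Back-off pulling image". The 10s base and
// 5m cap are assumed defaults, not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	base, maxDelay := 10*time.Second, 5*time.Minute
	delay := base
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying the pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}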
May 16 00:52:50.086784 containerd[2665]: time="2025-05-16T00:52:50.086739246Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:52:50.087028 containerd[2665]: time="2025-05-16T00:52:50.086999090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:52:50.087077 containerd[2665]: time="2025-05-16T00:52:50.087044643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:52:50.089232 kubelet[4137]: E0516 00:52:50.089187 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:52:50.089305 kubelet[4137]: E0516 00:52:50.089249 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:52:50.089405 kubelet[4137]: E0516 00:52:50.089369 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:52:50.091013 kubelet[4137]: E0516 00:52:50.090971 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:52:50.095644 containerd[2665]: time="2025-05-16T00:52:50.095613036Z" level=info msg="StartContainer for 
\"8e8c71cddc8f893c6251eb4b06528e84e70510d04cd350c85cd3d851eeec102d\" returns successfully" May 16 00:52:50.196820 systemd-networkd[2566]: cali7b4f2982437: Gained IPv6LL May 16 00:52:50.658363 kubelet[4137]: I0516 00:52:50.658341 4137 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 00:52:50.658363 kubelet[4137]: I0516 00:52:50.658369 4137 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 00:52:50.720925 kubelet[4137]: I0516 00:52:50.720901 4137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:52:50.721235 kubelet[4137]: I0516 00:52:50.720979 4137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:52:50.721669 kubelet[4137]: E0516 00:52:50.721637 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:52:50.727048 kubelet[4137]: I0516 00:52:50.726399 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cf5649dd4-49jjr" podStartSLOduration=12.504798697 podStartE2EDuration="13.726386107s" podCreationTimestamp="2025-05-16 00:52:37 +0000 UTC" firstStartedPulling="2025-05-16 00:52:48.465355724 +0000 UTC m=+26.929102831" lastFinishedPulling="2025-05-16 00:52:49.686943134 +0000 UTC m=+28.150690241" observedRunningTime="2025-05-16 00:52:50.726054634 +0000 UTC m=+29.189801741" watchObservedRunningTime="2025-05-16 00:52:50.726386107 +0000 UTC m=+29.190133214" May 16 00:52:50.741137 kubelet[4137]: I0516 00:52:50.741088 4137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-65t8z" podStartSLOduration=7.976128964 podStartE2EDuration="9.741072438s" podCreationTimestamp="2025-05-16 00:52:41 +0000 UTC" firstStartedPulling="2025-05-16 00:52:48.272878266 +0000 UTC m=+26.736625373" lastFinishedPulling="2025-05-16 00:52:50.03782174 +0000 UTC m=+28.501568847" observedRunningTime="2025-05-16 00:52:50.733877972 +0000 UTC m=+29.197625159" watchObservedRunningTime="2025-05-16 00:52:50.741072438 +0000 UTC m=+29.204819545" May 16 00:52:51.028866 systemd-networkd[2566]: calib0f2ded3b41: Gained IPv6LL May 16 00:52:52.016377 kubelet[4137]: I0516 00:52:52.016338 4137 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:53:02.313168 kubelet[4137]: I0516 00:53:02.313121 4137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:53:02.607394 containerd[2665]: time="2025-05-16T00:53:02.607281301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:53:02.646926 containerd[2665]: time="2025-05-16T00:53:02.646875089Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:53:02.652087 containerd[2665]: time="2025-05-16T00:53:02.652049513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:53:02.652152 containerd[2665]: time="2025-05-16T00:53:02.652096630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:53:02.652270 kubelet[4137]: E0516 00:53:02.652231 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:53:02.652347 kubelet[4137]: E0516 00:53:02.652273 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:53:02.652514 containerd[2665]: time="2025-05-16T00:53:02.652492045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:53:02.652573 kubelet[4137]: E0516 00:53:02.652516 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ns2w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:53:02.653679 kubelet[4137]: E0516 00:53:02.653658 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:53:02.682197 containerd[2665]: time="2025-05-16T00:53:02.682165197Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:53:02.682427 containerd[2665]: time="2025-05-16T00:53:02.682401662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:53:02.682490 containerd[2665]: time="2025-05-16T00:53:02.682451019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:53:02.682587 kubelet[4137]: E0516 00:53:02.682547 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:53:02.682626 kubelet[4137]: E0516 00:53:02.682595 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:53:02.682738 kubelet[4137]: E0516 00:53:02.682705 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8533133c9db34d9e9fc5ed2e01534719,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:53:02.684315 containerd[2665]: time="2025-05-16T00:53:02.684297139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:53:02.706527 containerd[2665]: time="2025-05-16T00:53:02.706490897Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:53:02.706781 containerd[2665]: time="2025-05-16T00:53:02.706747961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:53:02.706807 containerd[2665]: time="2025-05-16T00:53:02.706749480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:53:02.706926 kubelet[4137]: E0516 00:53:02.706895 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:53:02.706964 kubelet[4137]: E0516 00:53:02.706930 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:53:02.707038 kubelet[4137]: E0516 00:53:02.707004 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:53:02.708904 kubelet[4137]: E0516 00:53:02.708869 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:53:04.166926 kubelet[4137]: I0516 00:53:04.166880 4137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:53:14.607443 kubelet[4137]: E0516 00:53:14.607317 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:53:17.612614 kubelet[4137]: E0516 00:53:17.612555 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:53:21.599970 containerd[2665]: time="2025-05-16T00:53:21.599926046Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" May 16 00:53:21.600327 containerd[2665]: time="2025-05-16T00:53:21.600028124Z" level=info msg="TearDown network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" successfully" May 16 00:53:21.600327 containerd[2665]: time="2025-05-16T00:53:21.600038044Z" level=info msg="StopPodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" returns successfully" May 16 00:53:21.600327 containerd[2665]: time="2025-05-16T00:53:21.600308918Z" level=info 
msg="RemovePodSandbox for \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" May 16 00:53:21.600400 containerd[2665]: time="2025-05-16T00:53:21.600345598Z" level=info msg="Forcibly stopping sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\"" May 16 00:53:21.600423 containerd[2665]: time="2025-05-16T00:53:21.600412836Z" level=info msg="TearDown network for sandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" successfully" May 16 00:53:21.602109 containerd[2665]: time="2025-05-16T00:53:21.602086725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.602146 containerd[2665]: time="2025-05-16T00:53:21.602135724Z" level=info msg="RemovePodSandbox \"8411186e5295c055455f68d5a67a6f2de45dbf2b60fa274638b5ba552fd5900f\" returns successfully" May 16 00:53:21.602381 containerd[2665]: time="2025-05-16T00:53:21.602362239Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\"" May 16 00:53:21.602457 containerd[2665]: time="2025-05-16T00:53:21.602444278Z" level=info msg="TearDown network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" successfully" May 16 00:53:21.602480 containerd[2665]: time="2025-05-16T00:53:21.602458157Z" level=info msg="StopPodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" returns successfully" May 16 00:53:21.602625 containerd[2665]: time="2025-05-16T00:53:21.602609235Z" level=info msg="RemovePodSandbox for \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\"" May 16 00:53:21.602648 containerd[2665]: time="2025-05-16T00:53:21.602631194Z" level=info msg="Forcibly stopping sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\"" May 16 00:53:21.602692 containerd[2665]: time="2025-05-16T00:53:21.602682713Z" level=info msg="TearDown network for sandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" successfully" May 16 00:53:21.603961 containerd[2665]: time="2025-05-16T00:53:21.603938729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.603992 containerd[2665]: time="2025-05-16T00:53:21.603981408Z" level=info msg="RemovePodSandbox \"e1fc782d5605460bd364be0159ea0b96fa5f670e5fddf611f12dcaf9a85af88e\" returns successfully" May 16 00:53:21.604186 containerd[2665]: time="2025-05-16T00:53:21.604169605Z" level=info msg="StopPodSandbox for \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\"" May 16 00:53:21.604259 containerd[2665]: time="2025-05-16T00:53:21.604248683Z" level=info msg="TearDown network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" successfully" May 16 00:53:21.604283 containerd[2665]: time="2025-05-16T00:53:21.604259123Z" level=info msg="StopPodSandbox for \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" returns successfully" May 16 00:53:21.604434 containerd[2665]: time="2025-05-16T00:53:21.604419880Z" level=info msg="RemovePodSandbox for \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\"" May 16 00:53:21.604454 containerd[2665]: time="2025-05-16T00:53:21.604440520Z" level=info msg="Forcibly stopping sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\"" May 16 00:53:21.604510 containerd[2665]: time="2025-05-16T00:53:21.604500199Z" level=info msg="TearDown network for sandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" successfully" May 16 00:53:21.605774 containerd[2665]: time="2025-05-16T00:53:21.605752815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.605841 containerd[2665]: time="2025-05-16T00:53:21.605794734Z" level=info msg="RemovePodSandbox \"999c478f0f26119c11530df94e59a01d01b0de2840319c4d5a8e57bce3acd1b1\" returns successfully" May 16 00:53:21.606000 containerd[2665]: time="2025-05-16T00:53:21.605982930Z" level=info msg="StopPodSandbox for \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\"" May 16 00:53:21.606061 containerd[2665]: time="2025-05-16T00:53:21.606049689Z" level=info msg="TearDown network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\" successfully" May 16 00:53:21.606084 containerd[2665]: time="2025-05-16T00:53:21.606060049Z" level=info msg="StopPodSandbox for \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\" returns successfully" May 16 00:53:21.606244 containerd[2665]: time="2025-05-16T00:53:21.606231166Z" level=info msg="RemovePodSandbox for \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\"" May 16 00:53:21.606266 containerd[2665]: time="2025-05-16T00:53:21.606249285Z" level=info msg="Forcibly stopping sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\"" May 16 00:53:21.606318 containerd[2665]: time="2025-05-16T00:53:21.606308844Z" level=info msg="TearDown network for sandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\" successfully" May 16 00:53:21.607494 containerd[2665]: time="2025-05-16T00:53:21.607476182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.607525 containerd[2665]: time="2025-05-16T00:53:21.607514581Z" level=info msg="RemovePodSandbox \"75321f20fa15602213777fefd83ac7b4a188f955c249645393696bb43bca8110\" returns successfully" May 16 00:53:21.607692 containerd[2665]: time="2025-05-16T00:53:21.607679898Z" level=info msg="StopPodSandbox for \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\"" May 16 00:53:21.607760 containerd[2665]: time="2025-05-16T00:53:21.607749177Z" level=info msg="TearDown network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\" successfully" May 16 00:53:21.607783 containerd[2665]: time="2025-05-16T00:53:21.607760176Z" level=info msg="StopPodSandbox for \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\" returns successfully" May 16 00:53:21.607926 containerd[2665]: time="2025-05-16T00:53:21.607914133Z" level=info msg="RemovePodSandbox for \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\"" May 16 00:53:21.607949 containerd[2665]: time="2025-05-16T00:53:21.607930813Z" level=info msg="Forcibly stopping sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\"" May 16 00:53:21.607988 containerd[2665]: time="2025-05-16T00:53:21.607978492Z" level=info msg="TearDown network for sandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\" successfully" May 16 00:53:21.609237 containerd[2665]: time="2025-05-16T00:53:21.609213029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.609266 containerd[2665]: time="2025-05-16T00:53:21.609258388Z" level=info msg="RemovePodSandbox \"896024bad3149c5e92e0e5a2161e4acd953efee5ac6eb7c2ac73082b399d0a5a\" returns successfully" May 16 00:53:21.609478 containerd[2665]: time="2025-05-16T00:53:21.609458824Z" level=info msg="StopPodSandbox for \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\"" May 16 00:53:21.609548 containerd[2665]: time="2025-05-16T00:53:21.609535983Z" level=info msg="TearDown network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" successfully" May 16 00:53:21.609548 containerd[2665]: time="2025-05-16T00:53:21.609546182Z" level=info msg="StopPodSandbox for \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" returns successfully" May 16 00:53:21.609722 containerd[2665]: time="2025-05-16T00:53:21.609704899Z" level=info msg="RemovePodSandbox for \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\"" May 16 00:53:21.609758 containerd[2665]: time="2025-05-16T00:53:21.609726219Z" level=info msg="Forcibly stopping sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\"" May 16 00:53:21.609803 containerd[2665]: time="2025-05-16T00:53:21.609790818Z" level=info msg="TearDown network for sandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" successfully" May 16 00:53:21.611042 containerd[2665]: time="2025-05-16T00:53:21.611020434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.611073 containerd[2665]: time="2025-05-16T00:53:21.611061393Z" level=info msg="RemovePodSandbox \"5ac7dadb9cbd50f335045fc65b7b03a087746783ed7c43dbde647e383e80c0c5\" returns successfully" May 16 00:53:21.611293 containerd[2665]: time="2025-05-16T00:53:21.611275989Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" May 16 00:53:21.611356 containerd[2665]: time="2025-05-16T00:53:21.611342868Z" level=info msg="TearDown network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" successfully" May 16 00:53:21.611380 containerd[2665]: time="2025-05-16T00:53:21.611353988Z" level=info msg="StopPodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" returns successfully" May 16 00:53:21.611545 containerd[2665]: time="2025-05-16T00:53:21.611527225Z" level=info msg="RemovePodSandbox for \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" May 16 00:53:21.611569 containerd[2665]: time="2025-05-16T00:53:21.611549264Z" level=info msg="Forcibly stopping sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\"" May 16 00:53:21.611623 containerd[2665]: time="2025-05-16T00:53:21.611610543Z" level=info msg="TearDown network for sandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" successfully" May 16 00:53:21.612892 containerd[2665]: time="2025-05-16T00:53:21.612869599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.612949 containerd[2665]: time="2025-05-16T00:53:21.612936158Z" level=info msg="RemovePodSandbox \"117e4f6281ea67129b7f943536b24ce57ca5d11be678dbcc6aa83d01faea8af0\" returns successfully" May 16 00:53:21.613149 containerd[2665]: time="2025-05-16T00:53:21.613133114Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\"" May 16 00:53:21.613213 containerd[2665]: time="2025-05-16T00:53:21.613201033Z" level=info msg="TearDown network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" successfully" May 16 00:53:21.613213 containerd[2665]: time="2025-05-16T00:53:21.613210913Z" level=info msg="StopPodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" returns successfully" May 16 00:53:21.613421 containerd[2665]: time="2025-05-16T00:53:21.613404509Z" level=info msg="RemovePodSandbox for \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\"" May 16 00:53:21.613447 containerd[2665]: time="2025-05-16T00:53:21.613424908Z" level=info msg="Forcibly stopping sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\"" May 16 00:53:21.613491 containerd[2665]: time="2025-05-16T00:53:21.613478947Z" level=info msg="TearDown network for sandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" successfully" May 16 00:53:21.614832 containerd[2665]: time="2025-05-16T00:53:21.614808842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.614865 containerd[2665]: time="2025-05-16T00:53:21.614854241Z" level=info msg="RemovePodSandbox \"3c325a6870873642e3b5916f92b624fe276a45157668e378da87f98886de6a27\" returns successfully" May 16 00:53:21.615062 containerd[2665]: time="2025-05-16T00:53:21.615045318Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\"" May 16 00:53:21.615131 containerd[2665]: time="2025-05-16T00:53:21.615118996Z" level=info msg="TearDown network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" successfully" May 16 00:53:21.615131 containerd[2665]: time="2025-05-16T00:53:21.615129356Z" level=info msg="StopPodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" returns successfully" May 16 00:53:21.615321 containerd[2665]: time="2025-05-16T00:53:21.615303233Z" level=info msg="RemovePodSandbox for \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\"" May 16 00:53:21.615345 containerd[2665]: time="2025-05-16T00:53:21.615323992Z" level=info msg="Forcibly stopping sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\"" May 16 00:53:21.615393 containerd[2665]: time="2025-05-16T00:53:21.615381071Z" level=info msg="TearDown network for sandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" successfully" May 16 00:53:21.616620 containerd[2665]: time="2025-05-16T00:53:21.616594368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.616669 containerd[2665]: time="2025-05-16T00:53:21.616637847Z" level=info msg="RemovePodSandbox \"1ac1f1804ee90a4039c2f85fdbf8b757fa092817097a18cf957c5a7def3fb57c\" returns successfully" May 16 00:53:21.616850 containerd[2665]: time="2025-05-16T00:53:21.616829364Z" level=info msg="StopPodSandbox for \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\"" May 16 00:53:21.616908 containerd[2665]: time="2025-05-16T00:53:21.616895242Z" level=info msg="TearDown network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" successfully" May 16 00:53:21.616931 containerd[2665]: time="2025-05-16T00:53:21.616905522Z" level=info msg="StopPodSandbox for \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" returns successfully" May 16 00:53:21.617067 containerd[2665]: time="2025-05-16T00:53:21.617054919Z" level=info msg="RemovePodSandbox for \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\"" May 16 00:53:21.617088 containerd[2665]: time="2025-05-16T00:53:21.617073199Z" level=info msg="Forcibly stopping sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\"" May 16 00:53:21.617135 containerd[2665]: time="2025-05-16T00:53:21.617125518Z" level=info msg="TearDown network for sandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" successfully" May 16 00:53:21.618367 containerd[2665]: time="2025-05-16T00:53:21.618347215Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.618404 containerd[2665]: time="2025-05-16T00:53:21.618393574Z" level=info msg="RemovePodSandbox \"3986a348c2b8c4281bcf4c76a084d63b2de2b5ced3aa6723c95cc1c565da638f\" returns successfully" May 16 00:53:21.618577 containerd[2665]: time="2025-05-16T00:53:21.618564331Z" level=info msg="StopPodSandbox for \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\"" May 16 00:53:21.618637 containerd[2665]: time="2025-05-16T00:53:21.618627689Z" level=info msg="TearDown network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\" successfully" May 16 00:53:21.618657 containerd[2665]: time="2025-05-16T00:53:21.618637329Z" level=info msg="StopPodSandbox for \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\" returns successfully" May 16 00:53:21.618833 containerd[2665]: time="2025-05-16T00:53:21.618816686Z" level=info msg="RemovePodSandbox for \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\"" May 16 00:53:21.618853 containerd[2665]: time="2025-05-16T00:53:21.618839285Z" level=info msg="Forcibly stopping sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\"" May 16 00:53:21.618914 containerd[2665]: time="2025-05-16T00:53:21.618903484Z" level=info msg="TearDown network for sandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\" successfully" May 16 00:53:21.620193 containerd[2665]: time="2025-05-16T00:53:21.620170060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.620234 containerd[2665]: time="2025-05-16T00:53:21.620215019Z" level=info msg="RemovePodSandbox \"12b6e4c9cbb7375fc0f1ddb10992aa5ff086ddb0f6160faf66bec2c59faf3884\" returns successfully" May 16 00:53:21.620450 containerd[2665]: time="2025-05-16T00:53:21.620433815Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" May 16 00:53:21.620522 containerd[2665]: time="2025-05-16T00:53:21.620510653Z" level=info msg="TearDown network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" successfully" May 16 00:53:21.620543 containerd[2665]: time="2025-05-16T00:53:21.620522253Z" level=info msg="StopPodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" returns successfully" May 16 00:53:21.620709 containerd[2665]: time="2025-05-16T00:53:21.620693770Z" level=info msg="RemovePodSandbox for \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" May 16 00:53:21.620731 containerd[2665]: time="2025-05-16T00:53:21.620714250Z" level=info msg="Forcibly stopping sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\"" May 16 00:53:21.620789 containerd[2665]: time="2025-05-16T00:53:21.620780208Z" level=info msg="TearDown network for sandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" successfully" May 16 00:53:21.622008 containerd[2665]: time="2025-05-16T00:53:21.621985265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.622037 containerd[2665]: time="2025-05-16T00:53:21.622026825Z" level=info msg="RemovePodSandbox \"0219e9deb44f1e018f0b8d7062283b849335cafa60a58c745db01b05b14a93ca\" returns successfully" May 16 00:53:21.622221 containerd[2665]: time="2025-05-16T00:53:21.622203861Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\"" May 16 00:53:21.622287 containerd[2665]: time="2025-05-16T00:53:21.622274820Z" level=info msg="TearDown network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" successfully" May 16 00:53:21.622309 containerd[2665]: time="2025-05-16T00:53:21.622285380Z" level=info msg="StopPodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" returns successfully" May 16 00:53:21.622478 containerd[2665]: time="2025-05-16T00:53:21.622462176Z" level=info msg="RemovePodSandbox for \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\"" May 16 00:53:21.622500 containerd[2665]: time="2025-05-16T00:53:21.622483776Z" level=info msg="Forcibly stopping sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\"" May 16 00:53:21.622552 containerd[2665]: time="2025-05-16T00:53:21.622537935Z" level=info msg="TearDown network for sandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" successfully" May 16 00:53:21.623808 containerd[2665]: time="2025-05-16T00:53:21.623785191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.623860 containerd[2665]: time="2025-05-16T00:53:21.623830110Z" level=info msg="RemovePodSandbox \"3030e75b517992b8b8bff9cb6b2db421b602d808c3bd0c891f0d65cc968a19ab\" returns successfully" May 16 00:53:21.624065 containerd[2665]: time="2025-05-16T00:53:21.624049706Z" level=info msg="StopPodSandbox for \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\"" May 16 00:53:21.624127 containerd[2665]: time="2025-05-16T00:53:21.624117105Z" level=info msg="TearDown network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" successfully" May 16 00:53:21.624150 containerd[2665]: time="2025-05-16T00:53:21.624127345Z" level=info msg="StopPodSandbox for \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" returns successfully" May 16 00:53:21.624333 containerd[2665]: time="2025-05-16T00:53:21.624318541Z" level=info msg="RemovePodSandbox for \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\"" May 16 00:53:21.624353 containerd[2665]: time="2025-05-16T00:53:21.624338580Z" level=info msg="Forcibly stopping sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\"" May 16 00:53:21.624407 containerd[2665]: time="2025-05-16T00:53:21.624396299Z" level=info msg="TearDown network for sandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" successfully" May 16 00:53:21.625638 containerd[2665]: time="2025-05-16T00:53:21.625619156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.625672 containerd[2665]: time="2025-05-16T00:53:21.625663115Z" level=info msg="RemovePodSandbox \"3c46429af802bfe95957609cf05fac699790dda72177790b9c115d246cad7cc1\" returns successfully" May 16 00:53:21.625862 containerd[2665]: time="2025-05-16T00:53:21.625848912Z" level=info msg="StopPodSandbox for \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\"" May 16 00:53:21.625927 containerd[2665]: time="2025-05-16T00:53:21.625916430Z" level=info msg="TearDown network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\" successfully" May 16 00:53:21.625948 containerd[2665]: time="2025-05-16T00:53:21.625927190Z" level=info msg="StopPodSandbox for \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\" returns successfully" May 16 00:53:21.626114 containerd[2665]: time="2025-05-16T00:53:21.626099547Z" level=info msg="RemovePodSandbox for \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\"" May 16 00:53:21.626135 containerd[2665]: time="2025-05-16T00:53:21.626121787Z" level=info msg="Forcibly stopping sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\"" May 16 00:53:21.626197 containerd[2665]: time="2025-05-16T00:53:21.626187625Z" level=info msg="TearDown network for sandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\" successfully" May 16 00:53:21.627461 containerd[2665]: time="2025-05-16T00:53:21.627438201Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.627497 containerd[2665]: time="2025-05-16T00:53:21.627484961Z" level=info msg="RemovePodSandbox \"fe570dcd5568818fc357db53a8f196c6bbd6af290d4e1469388a0bf262932940\" returns successfully" May 16 00:53:21.627687 containerd[2665]: time="2025-05-16T00:53:21.627670277Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" May 16 00:53:21.627753 containerd[2665]: time="2025-05-16T00:53:21.627736116Z" level=info msg="TearDown network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" successfully" May 16 00:53:21.627779 containerd[2665]: time="2025-05-16T00:53:21.627750835Z" level=info msg="StopPodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" returns successfully" May 16 00:53:21.627966 containerd[2665]: time="2025-05-16T00:53:21.627950272Z" level=info msg="RemovePodSandbox for \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" May 16 00:53:21.627987 containerd[2665]: time="2025-05-16T00:53:21.627967591Z" level=info msg="Forcibly stopping sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\"" May 16 00:53:21.628027 containerd[2665]: time="2025-05-16T00:53:21.628016150Z" level=info msg="TearDown network for sandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" successfully" May 16 00:53:21.629236 containerd[2665]: time="2025-05-16T00:53:21.629214808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.629266 containerd[2665]: time="2025-05-16T00:53:21.629256127Z" level=info msg="RemovePodSandbox \"8bdefcf34208c870aed7e4757b347c03ca30b37aabceb3f817dd6836ebf3ffef\" returns successfully" May 16 00:53:21.629461 containerd[2665]: time="2025-05-16T00:53:21.629446443Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\"" May 16 00:53:21.629525 containerd[2665]: time="2025-05-16T00:53:21.629514642Z" level=info msg="TearDown network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" successfully" May 16 00:53:21.629545 containerd[2665]: time="2025-05-16T00:53:21.629524722Z" level=info msg="StopPodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" returns successfully" May 16 00:53:21.629715 containerd[2665]: time="2025-05-16T00:53:21.629699278Z" level=info msg="RemovePodSandbox for \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\"" May 16 00:53:21.629738 containerd[2665]: time="2025-05-16T00:53:21.629720758Z" level=info msg="Forcibly stopping sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\"" May 16 00:53:21.629798 containerd[2665]: time="2025-05-16T00:53:21.629788277Z" level=info msg="TearDown network for sandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" successfully" May 16 00:53:21.631003 containerd[2665]: time="2025-05-16T00:53:21.630981094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.631050 containerd[2665]: time="2025-05-16T00:53:21.631021493Z" level=info msg="RemovePodSandbox \"5dc4afc0769898caa8b087f355a1418bb760cb2da33a0b621392f6e5265176c1\" returns successfully" May 16 00:53:21.631217 containerd[2665]: time="2025-05-16T00:53:21.631203130Z" level=info msg="StopPodSandbox for \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\"" May 16 00:53:21.631287 containerd[2665]: time="2025-05-16T00:53:21.631276368Z" level=info msg="TearDown network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" successfully" May 16 00:53:21.631308 containerd[2665]: time="2025-05-16T00:53:21.631287248Z" level=info msg="StopPodSandbox for \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" returns successfully" May 16 00:53:21.631457 containerd[2665]: time="2025-05-16T00:53:21.631442845Z" level=info msg="RemovePodSandbox for \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\"" May 16 00:53:21.631480 containerd[2665]: time="2025-05-16T00:53:21.631462965Z" level=info msg="Forcibly stopping sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\"" May 16 00:53:21.631532 containerd[2665]: time="2025-05-16T00:53:21.631522644Z" level=info msg="TearDown network for sandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" successfully" May 16 00:53:21.632752 containerd[2665]: time="2025-05-16T00:53:21.632724261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.632779 containerd[2665]: time="2025-05-16T00:53:21.632771940Z" level=info msg="RemovePodSandbox \"34d07f15edd3d5005ca53d5799ada5a58718e432e89698be8c9ea85648bacfdb\" returns successfully" May 16 00:53:21.633003 containerd[2665]: time="2025-05-16T00:53:21.632985256Z" level=info msg="StopPodSandbox for \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\"" May 16 00:53:21.633067 containerd[2665]: time="2025-05-16T00:53:21.633054694Z" level=info msg="TearDown network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\" successfully" May 16 00:53:21.633092 containerd[2665]: time="2025-05-16T00:53:21.633065494Z" level=info msg="StopPodSandbox for \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\" returns successfully" May 16 00:53:21.633250 containerd[2665]: time="2025-05-16T00:53:21.633233171Z" level=info msg="RemovePodSandbox for \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\"" May 16 00:53:21.633272 containerd[2665]: time="2025-05-16T00:53:21.633253931Z" level=info msg="Forcibly stopping sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\"" May 16 00:53:21.633326 containerd[2665]: time="2025-05-16T00:53:21.633314289Z" level=info msg="TearDown network for sandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\" successfully" May 16 00:53:21.634572 containerd[2665]: time="2025-05-16T00:53:21.634541306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.634621 containerd[2665]: time="2025-05-16T00:53:21.634601745Z" level=info msg="RemovePodSandbox \"2648463af885333da27ed58829c19bdb274fc38876f5593c08492097ffc1d522\" returns successfully" May 16 00:53:21.634831 containerd[2665]: time="2025-05-16T00:53:21.634811821Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" May 16 00:53:21.634897 containerd[2665]: time="2025-05-16T00:53:21.634886779Z" level=info msg="TearDown network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" successfully" May 16 00:53:21.634918 containerd[2665]: time="2025-05-16T00:53:21.634896819Z" level=info msg="StopPodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" returns successfully" May 16 00:53:21.635079 containerd[2665]: time="2025-05-16T00:53:21.635064816Z" level=info msg="RemovePodSandbox for \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" May 16 00:53:21.635102 containerd[2665]: time="2025-05-16T00:53:21.635086056Z" level=info msg="Forcibly stopping sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\"" May 16 00:53:21.635160 containerd[2665]: time="2025-05-16T00:53:21.635149974Z" level=info msg="TearDown network for sandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" successfully" May 16 00:53:21.636396 containerd[2665]: time="2025-05-16T00:53:21.636376071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.636432 containerd[2665]: time="2025-05-16T00:53:21.636422910Z" level=info msg="RemovePodSandbox \"1fb9f8dc6df111b67e601b1761764663e77121d1f973fb1d04eb94b933247563\" returns successfully" May 16 00:53:21.636624 containerd[2665]: time="2025-05-16T00:53:21.636609867Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\"" May 16 00:53:21.636694 containerd[2665]: time="2025-05-16T00:53:21.636683465Z" level=info msg="TearDown network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" successfully" May 16 00:53:21.636718 containerd[2665]: time="2025-05-16T00:53:21.636693625Z" level=info msg="StopPodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" returns successfully" May 16 00:53:21.636867 containerd[2665]: time="2025-05-16T00:53:21.636851862Z" level=info msg="RemovePodSandbox for \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\"" May 16 00:53:21.636893 containerd[2665]: time="2025-05-16T00:53:21.636872262Z" level=info msg="Forcibly stopping sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\"" May 16 00:53:21.636934 containerd[2665]: time="2025-05-16T00:53:21.636925061Z" level=info msg="TearDown network for sandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" successfully" May 16 00:53:21.652244 containerd[2665]: time="2025-05-16T00:53:21.652211769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.652280 containerd[2665]: time="2025-05-16T00:53:21.652262488Z" level=info msg="RemovePodSandbox \"38ed1da29bd6d8593c9d3d88b5b58c7a0cd49640eafeae4e712fc5377891e629\" returns successfully" May 16 00:53:21.652504 containerd[2665]: time="2025-05-16T00:53:21.652486124Z" level=info msg="StopPodSandbox for \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\"" May 16 00:53:21.652576 containerd[2665]: time="2025-05-16T00:53:21.652563803Z" level=info msg="TearDown network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" successfully" May 16 00:53:21.652576 containerd[2665]: time="2025-05-16T00:53:21.652573682Z" level=info msg="StopPodSandbox for \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" returns successfully" May 16 00:53:21.652833 containerd[2665]: time="2025-05-16T00:53:21.652812118Z" level=info msg="RemovePodSandbox for \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\"" May 16 00:53:21.652857 containerd[2665]: time="2025-05-16T00:53:21.652836077Z" level=info msg="Forcibly stopping sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\"" May 16 00:53:21.653035 containerd[2665]: time="2025-05-16T00:53:21.652925676Z" level=info msg="TearDown network for sandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" successfully" May 16 00:53:21.655625 containerd[2665]: time="2025-05-16T00:53:21.655599945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.655654 containerd[2665]: time="2025-05-16T00:53:21.655644024Z" level=info msg="RemovePodSandbox \"5e13367b44756674f951771efc6a562d77cc0a1317d8b4250cf6336ebb2973c9\" returns successfully" May 16 00:53:21.655880 containerd[2665]: time="2025-05-16T00:53:21.655860660Z" level=info msg="StopPodSandbox for \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\"" May 16 00:53:21.655952 containerd[2665]: time="2025-05-16T00:53:21.655939098Z" level=info msg="TearDown network for sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\" successfully" May 16 00:53:21.655952 containerd[2665]: time="2025-05-16T00:53:21.655949978Z" level=info msg="StopPodSandbox for \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\" returns successfully" May 16 00:53:21.656169 containerd[2665]: time="2025-05-16T00:53:21.656150374Z" level=info msg="RemovePodSandbox for \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\"" May 16 00:53:21.656193 containerd[2665]: time="2025-05-16T00:53:21.656175734Z" level=info msg="Forcibly stopping sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\"" May 16 00:53:21.656255 containerd[2665]: time="2025-05-16T00:53:21.656245452Z" level=info msg="TearDown network for sandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\" successfully" May 16 00:53:21.657523 containerd[2665]: time="2025-05-16T00:53:21.657501109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.657552 containerd[2665]: time="2025-05-16T00:53:21.657544948Z" level=info msg="RemovePodSandbox \"0c9dd1811fe6721d32de7448f0462e36b8454629ad1e93402f4ad8d11383fd4c\" returns successfully" May 16 00:53:21.657772 containerd[2665]: time="2025-05-16T00:53:21.657757664Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" May 16 00:53:21.657842 containerd[2665]: time="2025-05-16T00:53:21.657831862Z" level=info msg="TearDown network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" successfully" May 16 00:53:21.657863 containerd[2665]: time="2025-05-16T00:53:21.657841862Z" level=info msg="StopPodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" returns successfully" May 16 00:53:21.658051 containerd[2665]: time="2025-05-16T00:53:21.658031578Z" level=info msg="RemovePodSandbox for \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" May 16 00:53:21.658071 containerd[2665]: time="2025-05-16T00:53:21.658056738Z" level=info msg="Forcibly stopping sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\"" May 16 00:53:21.658124 containerd[2665]: time="2025-05-16T00:53:21.658112857Z" level=info msg="TearDown network for sandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" successfully" May 16 00:53:21.659381 containerd[2665]: time="2025-05-16T00:53:21.659355873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.659422 containerd[2665]: time="2025-05-16T00:53:21.659398912Z" level=info msg="RemovePodSandbox \"6e6b087f3c35719ae8c62816bcfcdc40e93866d62e9d8084469ee967db80ec4d\" returns successfully" May 16 00:53:21.659660 containerd[2665]: time="2025-05-16T00:53:21.659637668Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\"" May 16 00:53:21.659749 containerd[2665]: time="2025-05-16T00:53:21.659733626Z" level=info msg="TearDown network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" successfully" May 16 00:53:21.659773 containerd[2665]: time="2025-05-16T00:53:21.659749346Z" level=info msg="StopPodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" returns successfully" May 16 00:53:21.659964 containerd[2665]: time="2025-05-16T00:53:21.659949102Z" level=info msg="RemovePodSandbox for \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\"" May 16 00:53:21.659985 containerd[2665]: time="2025-05-16T00:53:21.659970181Z" level=info msg="Forcibly stopping sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\"" May 16 00:53:21.660042 containerd[2665]: time="2025-05-16T00:53:21.660033140Z" level=info msg="TearDown network for sandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" successfully" May 16 00:53:21.661265 containerd[2665]: time="2025-05-16T00:53:21.661240877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.661297 containerd[2665]: time="2025-05-16T00:53:21.661284916Z" level=info msg="RemovePodSandbox \"c4376dc621d1596c7791bc6cc33632d77f782c718510dfe996a998b189e53092\" returns successfully" May 16 00:53:21.661528 containerd[2665]: time="2025-05-16T00:53:21.661509872Z" level=info msg="StopPodSandbox for \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\"" May 16 00:53:21.661611 containerd[2665]: time="2025-05-16T00:53:21.661596071Z" level=info msg="TearDown network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" successfully" May 16 00:53:21.661633 containerd[2665]: time="2025-05-16T00:53:21.661608590Z" level=info msg="StopPodSandbox for \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" returns successfully" May 16 00:53:21.661798 containerd[2665]: time="2025-05-16T00:53:21.661783067Z" level=info msg="RemovePodSandbox for \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\"" May 16 00:53:21.661823 containerd[2665]: time="2025-05-16T00:53:21.661801747Z" level=info msg="Forcibly stopping sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\"" May 16 00:53:21.661863 containerd[2665]: time="2025-05-16T00:53:21.661851546Z" level=info msg="TearDown network for sandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" successfully" May 16 00:53:21.663377 containerd[2665]: time="2025-05-16T00:53:21.663355317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.663416 containerd[2665]: time="2025-05-16T00:53:21.663406036Z" level=info msg="RemovePodSandbox \"fcd463439d5c16abcbdc36a3b574332e38799462b42a8937c8721e7a71d5d8e4\" returns successfully" May 16 00:53:21.663614 containerd[2665]: time="2025-05-16T00:53:21.663599392Z" level=info msg="StopPodSandbox for \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\"" May 16 00:53:21.663678 containerd[2665]: time="2025-05-16T00:53:21.663667471Z" level=info msg="TearDown network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\" successfully" May 16 00:53:21.663701 containerd[2665]: time="2025-05-16T00:53:21.663678631Z" level=info msg="StopPodSandbox for \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\" returns successfully" May 16 00:53:21.663865 containerd[2665]: time="2025-05-16T00:53:21.663850628Z" level=info msg="RemovePodSandbox for \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\"" May 16 00:53:21.663886 containerd[2665]: time="2025-05-16T00:53:21.663869787Z" level=info msg="Forcibly stopping sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\"" May 16 00:53:21.663937 containerd[2665]: time="2025-05-16T00:53:21.663927146Z" level=info msg="TearDown network for sandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\" successfully" May 16 00:53:21.665261 containerd[2665]: time="2025-05-16T00:53:21.665239241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.665303 containerd[2665]: time="2025-05-16T00:53:21.665285200Z" level=info msg="RemovePodSandbox \"ad6bb086b4832fab4893f3b30e87b859ca158b69ad30db33b307877df2ac616f\" returns successfully" May 16 00:53:21.665526 containerd[2665]: time="2025-05-16T00:53:21.665511596Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" May 16 00:53:21.665600 containerd[2665]: time="2025-05-16T00:53:21.665589514Z" level=info msg="TearDown network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" successfully" May 16 00:53:21.665624 containerd[2665]: time="2025-05-16T00:53:21.665600154Z" level=info msg="StopPodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" returns successfully" May 16 00:53:21.665885 containerd[2665]: time="2025-05-16T00:53:21.665868629Z" level=info msg="RemovePodSandbox for \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" May 16 00:53:21.665909 containerd[2665]: time="2025-05-16T00:53:21.665891349Z" level=info msg="Forcibly stopping sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\"" May 16 00:53:21.665957 containerd[2665]: time="2025-05-16T00:53:21.665947908Z" level=info msg="TearDown network for sandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" successfully" May 16 00:53:21.667234 containerd[2665]: time="2025-05-16T00:53:21.667201324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.667270 containerd[2665]: time="2025-05-16T00:53:21.667257643Z" level=info msg="RemovePodSandbox \"7e18196f6cc93a2c35b58ecda8e1c5229be5591ba609e4ea681d35ce64a04386\" returns successfully" May 16 00:53:21.667471 containerd[2665]: time="2025-05-16T00:53:21.667452799Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\"" May 16 00:53:21.667548 containerd[2665]: time="2025-05-16T00:53:21.667533717Z" level=info msg="TearDown network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" successfully" May 16 00:53:21.667573 containerd[2665]: time="2025-05-16T00:53:21.667546117Z" level=info msg="StopPodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" returns successfully" May 16 00:53:21.667773 containerd[2665]: time="2025-05-16T00:53:21.667756073Z" level=info msg="RemovePodSandbox for \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\"" May 16 00:53:21.667799 containerd[2665]: time="2025-05-16T00:53:21.667777633Z" level=info msg="Forcibly stopping sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\"" May 16 00:53:21.667855 containerd[2665]: time="2025-05-16T00:53:21.667842711Z" level=info msg="TearDown network for sandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" successfully" May 16 00:53:21.669075 containerd[2665]: time="2025-05-16T00:53:21.669053088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.669108 containerd[2665]: time="2025-05-16T00:53:21.669098608Z" level=info msg="RemovePodSandbox \"31244ef91fee29f3efcc53e2016236f862f2669b9ec9eda6e64bcf50e44cb191\" returns successfully" May 16 00:53:21.669311 containerd[2665]: time="2025-05-16T00:53:21.669295444Z" level=info msg="StopPodSandbox for \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\"" May 16 00:53:21.669386 containerd[2665]: time="2025-05-16T00:53:21.669375322Z" level=info msg="TearDown network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" successfully" May 16 00:53:21.669407 containerd[2665]: time="2025-05-16T00:53:21.669386162Z" level=info msg="StopPodSandbox for \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" returns successfully" May 16 00:53:21.669571 containerd[2665]: time="2025-05-16T00:53:21.669557559Z" level=info msg="RemovePodSandbox for \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\"" May 16 00:53:21.669594 containerd[2665]: time="2025-05-16T00:53:21.669578478Z" level=info msg="Forcibly stopping sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\"" May 16 00:53:21.669643 containerd[2665]: time="2025-05-16T00:53:21.669633717Z" level=info msg="TearDown network for sandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" successfully" May 16 00:53:21.670958 containerd[2665]: time="2025-05-16T00:53:21.670937013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:53:21.671020 containerd[2665]: time="2025-05-16T00:53:21.670982092Z" level=info msg="RemovePodSandbox \"8d94075058e36b527b961da2b1b33ec793718e33cc41e6313b60209ac048338f\" returns successfully" May 16 00:53:21.671211 containerd[2665]: time="2025-05-16T00:53:21.671195968Z" level=info msg="StopPodSandbox for \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\"" May 16 00:53:21.671272 containerd[2665]: time="2025-05-16T00:53:21.671261646Z" level=info msg="TearDown network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\" successfully" May 16 00:53:21.671295 containerd[2665]: time="2025-05-16T00:53:21.671272726Z" level=info msg="StopPodSandbox for \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\" returns successfully" May 16 00:53:21.671446 containerd[2665]: time="2025-05-16T00:53:21.671432843Z" level=info msg="RemovePodSandbox for \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\"" May 16 00:53:21.671467 containerd[2665]: time="2025-05-16T00:53:21.671452443Z" level=info msg="Forcibly stopping sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\"" May 16 00:53:21.671525 containerd[2665]: time="2025-05-16T00:53:21.671514402Z" level=info msg="TearDown network for sandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\" successfully" May 16 00:53:21.672757 containerd[2665]: time="2025-05-16T00:53:21.672723578Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 16 00:53:21.672802 containerd[2665]: time="2025-05-16T00:53:21.672776377Z" level=info msg="RemovePodSandbox \"1f51a4763904321d2db314a03c384dad3d78a0025b8676985012a8db73868e2a\" returns successfully" May 16 00:53:29.607869 containerd[2665]: time="2025-05-16T00:53:29.607806999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:53:29.628636 containerd[2665]: time="2025-05-16T00:53:29.628566843Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:53:29.628841 containerd[2665]: time="2025-05-16T00:53:29.628822360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:53:29.628912 containerd[2665]: time="2025-05-16T00:53:29.628875119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:53:29.629048 kubelet[4137]: E0516 00:53:29.629001 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:53:29.629334 kubelet[4137]: E0516 00:53:29.629062 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:53:29.629334 kubelet[4137]: E0516 00:53:29.629296 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ns2w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:53:29.629432 containerd[2665]: time="2025-05-16T00:53:29.629344154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:53:29.630451 kubelet[4137]: E0516 00:53:29.630432 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:53:29.656058 containerd[2665]: time="2025-05-16T00:53:29.656022211Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:53:29.665738 containerd[2665]: time="2025-05-16T00:53:29.665713741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:53:29.665807 containerd[2665]: time="2025-05-16T00:53:29.665731140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:53:29.665856 kubelet[4137]: E0516 00:53:29.665831 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:53:29.665903 kubelet[4137]: E0516 00:53:29.665858 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:53:29.665962 kubelet[4137]: E0516 00:53:29.665930 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8533133c9db34d9e9fc5ed2e01534719,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:53:29.667622 containerd[2665]: time="2025-05-16T00:53:29.667576999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:53:29.690463 containerd[2665]: time="2025-05-16T00:53:29.690419860Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:53:29.690698 containerd[2665]: time="2025-05-16T00:53:29.690674057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:53:29.690757 containerd[2665]: time="2025-05-16T00:53:29.690722096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:53:29.690850 kubelet[4137]: E0516 00:53:29.690817 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:53:29.690901 kubelet[4137]: E0516 00:53:29.690856 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:53:29.690970 kubelet[4137]: E0516 00:53:29.690933 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:53:29.692104 kubelet[4137]: E0516 00:53:29.692074 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:53:43.607523 kubelet[4137]: E0516 00:53:43.607470 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:53:43.608061 kubelet[4137]: E0516 00:53:43.607764 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:53:54.607649 kubelet[4137]: E0516 00:53:54.607592 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:53:55.607034 kubelet[4137]: E0516 00:53:55.607004 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:54:05.607498 kubelet[4137]: E0516 00:54:05.607449 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:54:10.607715 containerd[2665]: time="2025-05-16T00:54:10.607602304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:54:10.645961 containerd[2665]: time="2025-05-16T00:54:10.645913552Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:54:10.646358 containerd[2665]: time="2025-05-16T00:54:10.646274552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:54:10.646358 containerd[2665]: time="2025-05-16T00:54:10.646317997Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:54:10.646475 kubelet[4137]: E0516 00:54:10.646430 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:54:10.646732 kubelet[4137]: E0516 00:54:10.646486 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:54:10.646897 kubelet[4137]: E0516 00:54:10.646648 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ns2w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:54:10.648614 kubelet[4137]: E0516 00:54:10.648252 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:54:16.607628 containerd[2665]: time="2025-05-16T00:54:16.607589139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:54:16.645069 containerd[2665]: time="2025-05-16T00:54:16.645000835Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:54:16.645394 containerd[2665]: time="2025-05-16T00:54:16.645357950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:54:16.645459 containerd[2665]: time="2025-05-16T00:54:16.645404514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:54:16.645674 kubelet[4137]: E0516 00:54:16.645614 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:54:16.646051 kubelet[4137]: E0516 00:54:16.645695 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:54:16.646051 kubelet[4137]: E0516 00:54:16.645870 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8533133c9db34d9e9fc5ed2e01534719,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:54:16.647717 containerd[2665]: time="2025-05-16T00:54:16.647700739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:54:16.679066 containerd[2665]: time="2025-05-16T00:54:16.679022960Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:54:16.679269 containerd[2665]: time="2025-05-16T00:54:16.679245862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:54:16.679334 containerd[2665]: time="2025-05-16T00:54:16.679298267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:54:16.679413 kubelet[4137]: E0516 00:54:16.679377 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:54:16.679454 kubelet[4137]: E0516 00:54:16.679423 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:54:16.679609 kubelet[4137]: E0516 00:54:16.679551 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:54:16.680745 kubelet[4137]: E0516 00:54:16.680713 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:54:25.607642 kubelet[4137]: E0516 00:54:25.607586 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:54:29.607655 kubelet[4137]: E0516 00:54:29.607593 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" 
podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:54:36.607571 kubelet[4137]: E0516 00:54:36.607511 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:54:39.599307 systemd[1]: Started sshd@7-147.28.151.230:22-61.170.196.19:41818.service - OpenSSH per-connection server daemon (61.170.196.19:41818). May 16 00:54:41.608269 kubelet[4137]: E0516 00:54:41.608224 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:54:42.097812 sshd[9113]: Invalid user centos from 61.170.196.19 port 41818 May 16 00:54:42.553586 sshd-session[9136]: pam_faillock(sshd:auth): User unknown May 16 00:54:42.559790 sshd[9113]: Postponed keyboard-interactive for invalid user centos from 61.170.196.19 port 41818 ssh2 [preauth] May 16 00:54:43.052349 sshd-session[9136]: pam_unix(sshd:auth): check pass; user unknown May 16 00:54:43.052372 sshd-session[9136]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.170.196.19 May 16 00:54:43.052647 sshd-session[9136]: pam_faillock(sshd:auth): User unknown May 16 00:54:44.809395 sshd[9113]: PAM: Permission denied for illegal user centos from 61.170.196.19 May 16 00:54:44.809797 sshd[9113]: Failed keyboard-interactive/pam for invalid user centos from 61.170.196.19 port 41818 ssh2 May 16 00:54:45.396672 sshd[9113]: Connection closed by invalid user centos 61.170.196.19 port 41818 [preauth] May 16 00:54:45.399407 systemd[1]: sshd@7-147.28.151.230:22-61.170.196.19:41818.service: Deactivated successfully. 
May 16 00:54:48.607723 kubelet[4137]: E0516 00:54:48.607670 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:54:54.607979 kubelet[4137]: E0516 00:54:54.607914 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:55:02.606840 kubelet[4137]: E0516 00:55:02.606795 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:55:06.608016 kubelet[4137]: E0516 00:55:06.607956 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:55:08.674128 systemd[1]: Started sshd@8-147.28.151.230:22-123.57.65.198:44886.service - OpenSSH per-connection server daemon (123.57.65.198:44886). May 16 00:55:08.916649 sshd[9231]: Unable to negotiate with 123.57.65.198 port 44886: no matching key exchange method found. Their offer: diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1 [preauth] May 16 00:55:08.918255 systemd[1]: sshd@8-147.28.151.230:22-123.57.65.198:44886.service: Deactivated successfully. May 16 00:55:16.607325 kubelet[4137]: E0516 00:55:16.607182 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:55:20.550823 update_engine[2659]: I20250516 00:55:20.550765 2659 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 16 00:55:20.550823 update_engine[2659]: I20250516 00:55:20.550818 2659 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 16 00:55:20.551221 update_engine[2659]: I20250516 00:55:20.551050 2659 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 16 00:55:20.551394 update_engine[2659]: I20250516 00:55:20.551378 2659 omaha_request_params.cc:62] Current group set to stable May 16 00:55:20.551464 update_engine[2659]: I20250516 00:55:20.551453 2659 update_attempter.cc:499] Already updated boot flags. Skipping. May 16 00:55:20.551487 update_engine[2659]: I20250516 00:55:20.551462 2659 update_attempter.cc:643] Scheduling an action processor start. 
May 16 00:55:20.551487 update_engine[2659]: I20250516 00:55:20.551476 2659 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 16 00:55:20.551524 update_engine[2659]: I20250516 00:55:20.551503 2659 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 16 00:55:20.551559 update_engine[2659]: I20250516 00:55:20.551548 2659 omaha_request_action.cc:271] Posting an Omaha request to disabled May 16 00:55:20.551583 update_engine[2659]: I20250516 00:55:20.551558 2659 omaha_request_action.cc:272] Request: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: May 16 00:55:20.551583 update_engine[2659]: I20250516 00:55:20.551563 2659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 00:55:20.551770 locksmithd[2691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 16 00:55:20.552468 update_engine[2659]: I20250516 00:55:20.552451 2659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 00:55:20.552727 update_engine[2659]: I20250516 00:55:20.552708 2659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 16 00:55:20.553200 update_engine[2659]: E20250516 00:55:20.553184 2659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 00:55:20.553244 update_engine[2659]: I20250516 00:55:20.553233 2659 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 16 00:55:20.607764 kubelet[4137]: E0516 00:55:20.607714 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:55:30.488578 update_engine[2659]: I20250516 00:55:30.488214 2659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 00:55:30.488578 update_engine[2659]: I20250516 00:55:30.488467 2659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 00:55:30.489047 update_engine[2659]: I20250516 00:55:30.488657 2659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 16 00:55:30.489159 update_engine[2659]: E20250516 00:55:30.489141 2659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 00:55:30.489186 update_engine[2659]: I20250516 00:55:30.489176 2659 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 16 00:55:31.607265 containerd[2665]: time="2025-05-16T00:55:31.607228786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:55:31.607632 kubelet[4137]: E0516 00:55:31.607385 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:55:31.636718 containerd[2665]: time="2025-05-16T00:55:31.636677696Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:55:31.636964 containerd[2665]: time="2025-05-16T00:55:31.636943266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:55:31.637027 containerd[2665]: time="2025-05-16T00:55:31.636994028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:55:31.637150 kubelet[4137]: E0516 00:55:31.637109 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:55:31.637218 kubelet[4137]: E0516 00:55:31.637164 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:55:31.637392 kubelet[4137]: E0516 00:55:31.637327 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ns2w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:55:31.638524 kubelet[4137]: E0516 
00:55:31.638493 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:55:40.488312 update_engine[2659]: I20250516 00:55:40.488245 2659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 00:55:40.488774 update_engine[2659]: I20250516 00:55:40.488497 2659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 00:55:40.488774 update_engine[2659]: I20250516 00:55:40.488690 2659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 16 00:55:40.489185 update_engine[2659]: E20250516 00:55:40.489167 2659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 00:55:40.489215 update_engine[2659]: I20250516 00:55:40.489205 2659 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 16 00:55:43.606839 kubelet[4137]: E0516 00:55:43.606790 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:55:46.607915 containerd[2665]: time="2025-05-16T00:55:46.607824510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:55:46.631055 containerd[2665]: time="2025-05-16T00:55:46.631018317Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:55:46.631285 containerd[2665]: time="2025-05-16T00:55:46.631261717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:55:46.631337 containerd[2665]: time="2025-05-16T00:55:46.631303230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:55:46.631473 kubelet[4137]: E0516 00:55:46.631428 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:55:46.631725 kubelet[4137]: E0516 00:55:46.631491 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:55:46.631725 kubelet[4137]: E0516 00:55:46.631591 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8533133c9db34d9e9fc5ed2e01534719,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:55:46.633198 containerd[2665]: time="2025-05-16T00:55:46.633182560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:55:46.655341 containerd[2665]: time="2025-05-16T00:55:46.655309423Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:55:46.655530 containerd[2665]: time="2025-05-16T00:55:46.655510630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:55:46.655600 containerd[2665]: time="2025-05-16T00:55:46.655577659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:55:46.655669 kubelet[4137]: E0516 00:55:46.655635 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:55:46.655721 kubelet[4137]: E0516 00:55:46.655678 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:55:46.655853 kubelet[4137]: E0516 00:55:46.655796 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:55:46.656985 kubelet[4137]: E0516 00:55:46.656956 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:55:50.488101 update_engine[2659]: I20250516 00:55:50.488052 2659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 00:55:50.488441 update_engine[2659]: I20250516 00:55:50.488324 2659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 00:55:50.488544 update_engine[2659]: I20250516 00:55:50.488523 2659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 16 00:55:50.489047 update_engine[2659]: E20250516 00:55:50.489032 2659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 00:55:50.489074 update_engine[2659]: I20250516 00:55:50.489063 2659 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 16 00:55:50.489099 update_engine[2659]: I20250516 00:55:50.489073 2659 omaha_request_action.cc:617] Omaha request response: May 16 00:55:50.489155 update_engine[2659]: E20250516 00:55:50.489142 2659 omaha_request_action.cc:636] Omaha request network transfer failed. May 16 00:55:50.489178 update_engine[2659]: I20250516 00:55:50.489161 2659 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 16 00:55:50.489178 update_engine[2659]: I20250516 00:55:50.489166 2659 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 16 00:55:50.489178 update_engine[2659]: I20250516 00:55:50.489171 2659 update_attempter.cc:306] Processing Done. May 16 00:55:50.489236 update_engine[2659]: E20250516 00:55:50.489182 2659 update_attempter.cc:619] Update failed. 
May 16 00:55:50.489236 update_engine[2659]: I20250516 00:55:50.489188 2659 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 16 00:55:50.489236 update_engine[2659]: I20250516 00:55:50.489192 2659 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 16 00:55:50.489236 update_engine[2659]: I20250516 00:55:50.489197 2659 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 16 00:55:50.489307 update_engine[2659]: I20250516 00:55:50.489252 2659 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 16 00:55:50.489307 update_engine[2659]: I20250516 00:55:50.489272 2659 omaha_request_action.cc:271] Posting an Omaha request to disabled May 16 00:55:50.489307 update_engine[2659]: I20250516 00:55:50.489277 2659 omaha_request_action.cc:272] Request: May 16 00:55:50.489307 update_engine[2659]: May 16 00:55:50.489307 update_engine[2659]: May 16 00:55:50.489307 update_engine[2659]: May 16 00:55:50.489307 update_engine[2659]: May 16 00:55:50.489307 update_engine[2659]: May 16 00:55:50.489307 update_engine[2659]: May 16 00:55:50.489307 update_engine[2659]: I20250516 00:55:50.489282 2659 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 00:55:50.489476 update_engine[2659]: I20250516 00:55:50.489387 2659 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 00:55:50.489508 locksmithd[2691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 16 00:55:50.489681 update_engine[2659]: I20250516 00:55:50.489526 2659 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 16 00:55:50.489925 update_engine[2659]: E20250516 00:55:50.489909 2659 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 00:55:50.489953 update_engine[2659]: I20250516 00:55:50.489943 2659 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 16 00:55:50.489953 update_engine[2659]: I20250516 00:55:50.489949 2659 omaha_request_action.cc:617] Omaha request response: May 16 00:55:50.489993 update_engine[2659]: I20250516 00:55:50.489955 2659 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 16 00:55:50.489993 update_engine[2659]: I20250516 00:55:50.489960 2659 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 16 00:55:50.489993 update_engine[2659]: I20250516 00:55:50.489964 2659 update_attempter.cc:306] Processing Done. May 16 00:55:50.489993 update_engine[2659]: I20250516 00:55:50.489969 2659 update_attempter.cc:310] Error event sent. 
May 16 00:55:50.489993 update_engine[2659]: I20250516 00:55:50.489975 2659 update_check_scheduler.cc:74] Next update check in 47m9s May 16 00:55:50.490112 locksmithd[2691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 16 00:55:58.606994 kubelet[4137]: E0516 00:55:58.606951 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:56:01.608568 kubelet[4137]: E0516 00:56:01.608421 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:56:13.607246 kubelet[4137]: E0516 00:56:13.607190 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:56:14.607025 kubelet[4137]: E0516 00:56:14.606957 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:56:24.607338 kubelet[4137]: E0516 00:56:24.607276 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:56:25.607373 kubelet[4137]: E0516 00:56:25.607336 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:56:37.607563 kubelet[4137]: E0516 00:56:37.607511 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:56:37.608063 kubelet[4137]: E0516 00:56:37.607859 4137 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:56:48.607010 kubelet[4137]: E0516 00:56:48.606939 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:56:48.607667 kubelet[4137]: E0516 00:56:48.607262 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:57:00.607859 kubelet[4137]: E0516 00:57:00.607803 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:57:03.607665 kubelet[4137]: E0516 00:57:03.607623 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:57:13.608180 kubelet[4137]: E0516 00:57:13.608108 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:57:15.606928 kubelet[4137]: E0516 00:57:15.606896 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:57:26.607459 kubelet[4137]: E0516 00:57:26.607401 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:57:28.607157 kubelet[4137]: E0516 00:57:28.607093 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:57:38.607282 kubelet[4137]: E0516 00:57:38.607232 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:57:40.607964 kubelet[4137]: E0516 00:57:40.607906 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:57:51.607720 kubelet[4137]: E0516 00:57:51.607515 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:57:55.607508 kubelet[4137]: E0516 00:57:55.607459 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:58:04.607238 kubelet[4137]: E0516 00:58:04.607186 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:58:06.135238 systemd[1]: Started 
sshd@9-147.28.151.230:22-60.21.134.178:60120.service - OpenSSH per-connection server daemon (60.21.134.178:60120). May 16 00:58:07.174003 sshd[9729]: banner exchange: Connection from 60.21.134.178 port 60120: invalid format May 16 00:58:07.175216 systemd[1]: sshd@9-147.28.151.230:22-60.21.134.178:60120.service: Deactivated successfully. May 16 00:58:07.607913 kubelet[4137]: E0516 00:58:07.607826 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:58:08.900311 systemd[1]: Started sshd@10-147.28.151.230:22-60.21.134.178:60586.service - OpenSSH per-connection server daemon (60.21.134.178:60586). May 16 00:58:16.607285 containerd[2665]: time="2025-05-16T00:58:16.607240792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:58:16.730897 containerd[2665]: time="2025-05-16T00:58:16.730853734Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:58:16.731135 containerd[2665]: time="2025-05-16T00:58:16.731107772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:58:16.731189 containerd[2665]: time="2025-05-16T00:58:16.731153572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:58:16.731310 kubelet[4137]: E0516 00:58:16.731263 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:58:16.731540 kubelet[4137]: 
E0516 00:58:16.731323 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:58:16.731540 kubelet[4137]: E0516 00:58:16.731456 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ns2w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-hrh6s_calico-system(aed452f7-5298-4a6e-bf6b-b06604585632): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:58:16.732622 kubelet[4137]: E0516 00:58:16.732594 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:58:18.607786 kubelet[4137]: E0516 00:58:18.607736 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:58:29.607654 containerd[2665]: time="2025-05-16T00:58:29.607614097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 00:58:29.633782 containerd[2665]: time="2025-05-16T00:58:29.633717921Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:58:29.634136 containerd[2665]: time="2025-05-16T00:58:29.634110999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:58:29.634277 containerd[2665]: time="2025-05-16T00:58:29.634155399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 00:58:29.634408 kubelet[4137]: E0516 00:58:29.634356 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:58:29.634900 kubelet[4137]: E0516 00:58:29.634737 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 00:58:29.634900 kubelet[4137]: E0516 00:58:29.634857 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8533133c9db34d9e9fc5ed2e01534719,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:58:29.636538 containerd[2665]: time="2025-05-16T00:58:29.636494067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 00:58:29.696729 containerd[2665]: time="2025-05-16T00:58:29.696675433Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:58:29.697025 containerd[2665]: time="2025-05-16T00:58:29.696997791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:58:29.697094 containerd[2665]: time="2025-05-16T00:58:29.697051671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 00:58:29.697179 kubelet[4137]: E0516 00:58:29.697151 4137 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:58:29.697253 kubelet[4137]: E0516 00:58:29.697187 4137 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 00:58:29.697325 kubelet[4137]: E0516 00:58:29.697278 4137 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6q6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-849858999-52vcl_calico-system(4134b6f6-00a5-4952-b6e3-dd3becf66cc3): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:58:29.698455 kubelet[4137]: E0516 00:58:29.698423 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:58:32.607143 kubelet[4137]: E0516 00:58:32.607098 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:58:44.607994 kubelet[4137]: E0516 00:58:44.607945 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:58:44.632277 systemd[1]: Started sshd@11-147.28.151.230:22-60.21.134.178:44170.service - OpenSSH per-connection server daemon (60.21.134.178:44170). May 16 00:58:47.607142 kubelet[4137]: E0516 00:58:47.607103 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:58:54.149292 systemd[1]: Started sshd@12-147.28.151.230:22-60.21.134.178:47772.service - OpenSSH per-connection server daemon (60.21.134.178:47772). 
May 16 00:58:59.607423 kubelet[4137]: E0516 00:58:59.607378 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:59:01.607724 kubelet[4137]: E0516 00:59:01.607678 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:59:12.607254 kubelet[4137]: E0516 00:59:12.607198 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:59:14.608150 kubelet[4137]: E0516 00:59:14.608062 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:59:23.526153 systemd[1]: Started sshd@13-147.28.151.230:22-60.21.134.178:56422.service - OpenSSH per-connection server daemon (60.21.134.178:56422). May 16 00:59:26.607108 kubelet[4137]: E0516 00:59:26.607065 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:59:28.607974 kubelet[4137]: E0516 00:59:28.607931 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:59:38.607317 kubelet[4137]: E0516 00:59:38.607262 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:59:40.608103 kubelet[4137]: E0516 00:59:40.608054 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:59:50.607257 kubelet[4137]: E0516 00:59:50.607200 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 00:59:53.608038 kubelet[4137]: E0516 00:59:53.607982 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 00:59:59.550275 systemd[1]: Started sshd@14-147.28.151.230:22-60.21.134.178:21489.service - OpenSSH per-connection server daemon (60.21.134.178:21489). 
May 16 01:00:04.606723 kubelet[4137]: E0516 01:00:04.606670 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 01:00:06.607199 kubelet[4137]: E0516 01:00:06.607154 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 01:00:08.920710 systemd[1]: sshd@10-147.28.151.230:22-60.21.134.178:60586.service: Deactivated successfully. 
May 16 01:00:17.607658 kubelet[4137]: E0516 01:00:17.607609 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 01:00:21.607906 kubelet[4137]: E0516 01:00:21.607847 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 01:00:30.607539 kubelet[4137]: E0516 01:00:30.607480 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 01:00:33.607298 kubelet[4137]: E0516 01:00:33.607241 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 01:00:35.593198 systemd[1]: Started sshd@15-147.28.151.230:22-60.21.134.178:18521.service - OpenSSH per-connection server daemon (60.21.134.178:18521). May 16 01:00:42.606909 kubelet[4137]: E0516 01:00:42.606856 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 01:00:44.651966 systemd[1]: sshd@11-147.28.151.230:22-60.21.134.178:44170.service: Deactivated successfully. May 16 01:00:48.607643 kubelet[4137]: E0516 01:00:48.607589 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 01:00:50.383189 systemd[1]: Started sshd@16-147.28.151.230:22-139.178.89.65:38052.service - OpenSSH per-connection server daemon (139.178.89.65:38052). May 16 01:00:50.817548 sshd[10174]: Accepted publickey for core from 139.178.89.65 port 38052 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:00:50.818544 sshd-session[10174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:00:50.822356 systemd-logind[2644]: New session 10 of user core. May 16 01:00:50.831904 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 16 01:00:51.182527 sshd[10176]: Connection closed by 139.178.89.65 port 38052 May 16 01:00:51.182900 sshd-session[10174]: pam_unix(sshd:session): session closed for user core May 16 01:00:51.185751 systemd[1]: sshd@16-147.28.151.230:22-139.178.89.65:38052.service: Deactivated successfully. May 16 01:00:51.187988 systemd[1]: session-10.scope: Deactivated successfully. May 16 01:00:51.188477 systemd-logind[2644]: Session 10 logged out. Waiting for processes to exit. May 16 01:00:51.189116 systemd-logind[2644]: Removed session 10. May 16 01:00:54.169001 systemd[1]: sshd@12-147.28.151.230:22-60.21.134.178:47772.service: Deactivated successfully. May 16 01:00:55.607288 kubelet[4137]: E0516 01:00:55.607247 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 01:00:56.260259 systemd[1]: Started sshd@17-147.28.151.230:22-139.178.89.65:38056.service - OpenSSH per-connection server daemon (139.178.89.65:38056). May 16 01:00:56.693694 sshd[10246]: Accepted publickey for core from 139.178.89.65 port 38056 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:00:56.694886 sshd-session[10246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:00:56.698170 systemd-logind[2644]: New session 11 of user core. May 16 01:00:56.706896 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 01:00:57.052305 sshd[10248]: Connection closed by 139.178.89.65 port 38056 May 16 01:00:57.052627 sshd-session[10246]: pam_unix(sshd:session): session closed for user core May 16 01:00:57.055424 systemd[1]: sshd@17-147.28.151.230:22-139.178.89.65:38056.service: Deactivated successfully. May 16 01:00:57.057175 systemd[1]: session-11.scope: Deactivated successfully. May 16 01:00:57.057693 systemd-logind[2644]: Session 11 logged out. Waiting for processes to exit. May 16 01:00:57.058267 systemd-logind[2644]: Removed session 11. May 16 01:00:57.132335 systemd[1]: Started sshd@18-147.28.151.230:22-139.178.89.65:56928.service - OpenSSH per-connection server daemon (139.178.89.65:56928). May 16 01:00:57.557905 sshd[10282]: Accepted publickey for core from 139.178.89.65 port 56928 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:00:57.561159 sshd-session[10282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:00:57.564621 systemd-logind[2644]: New session 12 of user core. May 16 01:00:57.577929 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 01:00:57.935988 sshd[10284]: Connection closed by 139.178.89.65 port 56928 May 16 01:00:57.936399 sshd-session[10282]: pam_unix(sshd:session): session closed for user core May 16 01:00:57.939337 systemd[1]: sshd@18-147.28.151.230:22-139.178.89.65:56928.service: Deactivated successfully. May 16 01:00:57.940994 systemd[1]: session-12.scope: Deactivated successfully. 
May 16 01:00:57.941499 systemd-logind[2644]: Session 12 logged out. Waiting for processes to exit. May 16 01:00:57.942069 systemd-logind[2644]: Removed session 12. May 16 01:00:58.008101 systemd[1]: Started sshd@19-147.28.151.230:22-139.178.89.65:56934.service - OpenSSH per-connection server daemon (139.178.89.65:56934). May 16 01:00:58.431396 sshd[10322]: Accepted publickey for core from 139.178.89.65 port 56934 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:00:58.432579 sshd-session[10322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:00:58.435546 systemd-logind[2644]: New session 13 of user core. May 16 01:00:58.447911 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 01:00:58.784890 sshd[10324]: Connection closed by 139.178.89.65 port 56934 May 16 01:00:58.785179 sshd-session[10322]: pam_unix(sshd:session): session closed for user core May 16 01:00:58.787978 systemd[1]: sshd@19-147.28.151.230:22-139.178.89.65:56934.service: Deactivated successfully. May 16 01:00:58.789594 systemd[1]: session-13.scope: Deactivated successfully. May 16 01:00:58.790095 systemd-logind[2644]: Session 13 logged out. Waiting for processes to exit. May 16 01:00:58.790641 systemd-logind[2644]: Removed session 13. May 16 01:00:59.608291 kubelet[4137]: E0516 01:00:59.608229 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 01:01:03.859164 systemd[1]: Started sshd@20-147.28.151.230:22-139.178.89.65:56950.service - OpenSSH per-connection server daemon (139.178.89.65:56950). May 16 01:01:04.284423 sshd[10366]: Accepted publickey for core from 139.178.89.65 port 56950 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:04.285591 sshd-session[10366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:04.288809 systemd-logind[2644]: New session 14 of user core. May 16 01:01:04.303847 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 01:01:04.636069 sshd[10389]: Connection closed by 139.178.89.65 port 56950 May 16 01:01:04.636488 sshd-session[10366]: pam_unix(sshd:session): session closed for user core May 16 01:01:04.639360 systemd[1]: sshd@20-147.28.151.230:22-139.178.89.65:56950.service: Deactivated successfully. May 16 01:01:04.641029 systemd[1]: session-14.scope: Deactivated successfully. 
May 16 01:01:04.641507 systemd-logind[2644]: Session 14 logged out. Waiting for processes to exit. May 16 01:01:04.642076 systemd-logind[2644]: Removed session 14. May 16 01:01:04.707229 systemd[1]: Started sshd@21-147.28.151.230:22-139.178.89.65:56952.service - OpenSSH per-connection server daemon (139.178.89.65:56952). May 16 01:01:05.120355 sshd[10423]: Accepted publickey for core from 139.178.89.65 port 56952 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:05.121379 sshd-session[10423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:05.124171 systemd-logind[2644]: New session 15 of user core. May 16 01:01:05.138901 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 01:01:05.598647 sshd[10425]: Connection closed by 139.178.89.65 port 56952 May 16 01:01:05.598945 sshd-session[10423]: pam_unix(sshd:session): session closed for user core May 16 01:01:05.601674 systemd[1]: sshd@21-147.28.151.230:22-139.178.89.65:56952.service: Deactivated successfully. May 16 01:01:05.603316 systemd[1]: session-15.scope: Deactivated successfully. May 16 01:01:05.603827 systemd-logind[2644]: Session 15 logged out. Waiting for processes to exit. May 16 01:01:05.604343 systemd-logind[2644]: Removed session 15. May 16 01:01:05.675162 systemd[1]: Started sshd@22-147.28.151.230:22-139.178.89.65:56968.service - OpenSSH per-connection server daemon (139.178.89.65:56968). May 16 01:01:06.111312 sshd[10455]: Accepted publickey for core from 139.178.89.65 port 56968 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:06.112314 sshd-session[10455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:06.115126 systemd-logind[2644]: New session 16 of user core. May 16 01:01:06.129900 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 01:01:06.607090 kubelet[4137]: E0516 01:01:06.607044 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 01:01:06.836411 sshd[10457]: Connection closed by 139.178.89.65 port 56968 May 16 01:01:06.836814 sshd-session[10455]: pam_unix(sshd:session): session closed for user core May 16 01:01:06.839611 systemd[1]: sshd@22-147.28.151.230:22-139.178.89.65:56968.service: Deactivated successfully. May 16 01:01:06.841347 systemd[1]: session-16.scope: Deactivated successfully. May 16 01:01:06.841867 systemd-logind[2644]: Session 16 logged out. Waiting for processes to exit. May 16 01:01:06.842424 systemd-logind[2644]: Removed session 16. May 16 01:01:06.907049 systemd[1]: Started sshd@23-147.28.151.230:22-139.178.89.65:49794.service - OpenSSH per-connection server daemon (139.178.89.65:49794). 
May 16 01:01:07.329896 sshd[10518]: Accepted publickey for core from 139.178.89.65 port 49794 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:07.330903 sshd-session[10518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:07.333834 systemd-logind[2644]: New session 17 of user core. May 16 01:01:07.342894 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 01:01:07.769674 sshd[10520]: Connection closed by 139.178.89.65 port 49794 May 16 01:01:07.770081 sshd-session[10518]: pam_unix(sshd:session): session closed for user core May 16 01:01:07.772862 systemd[1]: sshd@23-147.28.151.230:22-139.178.89.65:49794.service: Deactivated successfully. May 16 01:01:07.774606 systemd[1]: session-17.scope: Deactivated successfully. May 16 01:01:07.775137 systemd-logind[2644]: Session 17 logged out. Waiting for processes to exit. May 16 01:01:07.775668 systemd-logind[2644]: Removed session 17. May 16 01:01:07.843082 systemd[1]: Started sshd@24-147.28.151.230:22-139.178.89.65:49802.service - OpenSSH per-connection server daemon (139.178.89.65:49802). May 16 01:01:08.268363 sshd[10567]: Accepted publickey for core from 139.178.89.65 port 49802 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:08.269440 sshd-session[10567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:08.272439 systemd-logind[2644]: New session 18 of user core. May 16 01:01:08.284842 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 01:01:08.620615 sshd[10569]: Connection closed by 139.178.89.65 port 49802 May 16 01:01:08.621041 sshd-session[10567]: pam_unix(sshd:session): session closed for user core May 16 01:01:08.623836 systemd[1]: sshd@24-147.28.151.230:22-139.178.89.65:49802.service: Deactivated successfully. May 16 01:01:08.626048 systemd[1]: session-18.scope: Deactivated successfully. May 16 01:01:08.626557 systemd-logind[2644]: Session 18 logged out. Waiting for processes to exit. May 16 01:01:08.627138 systemd-logind[2644]: Removed session 18. May 16 01:01:10.608007 kubelet[4137]: E0516 01:01:10.607954 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 01:01:11.589161 systemd[1]: Started sshd@25-147.28.151.230:22-60.21.134.178:57765.service - OpenSSH per-connection server daemon (60.21.134.178:57765). 
May 16 01:01:13.697142 systemd[1]: Started sshd@26-147.28.151.230:22-139.178.89.65:49810.service - OpenSSH per-connection server daemon (139.178.89.65:49810). May 16 01:01:14.121022 sshd[10613]: Accepted publickey for core from 139.178.89.65 port 49810 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:14.122161 sshd-session[10613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:14.125238 systemd-logind[2644]: New session 19 of user core. May 16 01:01:14.138845 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 01:01:14.473226 sshd[10615]: Connection closed by 139.178.89.65 port 49810 May 16 01:01:14.473558 sshd-session[10613]: pam_unix(sshd:session): session closed for user core May 16 01:01:14.476381 systemd[1]: sshd@26-147.28.151.230:22-139.178.89.65:49810.service: Deactivated successfully. May 16 01:01:14.478020 systemd[1]: session-19.scope: Deactivated successfully. May 16 01:01:14.478538 systemd-logind[2644]: Session 19 logged out. Waiting for processes to exit. May 16 01:01:14.479115 systemd-logind[2644]: Removed session 19. May 16 01:01:19.548087 systemd[1]: Started sshd@27-147.28.151.230:22-139.178.89.65:54184.service - OpenSSH per-connection server daemon (139.178.89.65:54184). May 16 01:01:19.972790 sshd[10654]: Accepted publickey for core from 139.178.89.65 port 54184 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:19.973775 sshd-session[10654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:19.976625 systemd-logind[2644]: New session 20 of user core. May 16 01:01:19.984840 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 01:01:20.325024 sshd[10657]: Connection closed by 139.178.89.65 port 54184 May 16 01:01:20.325380 sshd-session[10654]: pam_unix(sshd:session): session closed for user core May 16 01:01:20.328166 systemd[1]: sshd@27-147.28.151.230:22-139.178.89.65:54184.service: Deactivated successfully. May 16 01:01:20.329804 systemd[1]: session-20.scope: Deactivated successfully. May 16 01:01:20.330296 systemd-logind[2644]: Session 20 logged out. Waiting for processes to exit. May 16 01:01:20.330862 systemd-logind[2644]: Removed session 20. 
May 16 01:01:20.607469 kubelet[4137]: E0516 01:01:20.607221 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-hrh6s" podUID="aed452f7-5298-4a6e-bf6b-b06604585632" May 16 01:01:21.607729 kubelet[4137]: E0516 01:01:21.607686 4137 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-849858999-52vcl" podUID="4134b6f6-00a5-4952-b6e3-dd3becf66cc3" May 16 01:01:23.545686 systemd[1]: sshd@13-147.28.151.230:22-60.21.134.178:56422.service: Deactivated successfully. May 16 01:01:25.397238 systemd[1]: Started sshd@28-147.28.151.230:22-139.178.89.65:54190.service - OpenSSH per-connection server daemon (139.178.89.65:54190). May 16 01:01:25.820761 sshd[10728]: Accepted publickey for core from 139.178.89.65 port 54190 ssh2: RSA SHA256:pEa5rwH0Li7OhCUc/570PRIyijCqbVNjiw9B/OFxXsY May 16 01:01:25.821868 sshd-session[10728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 01:01:25.825003 systemd-logind[2644]: New session 21 of user core. May 16 01:01:25.833910 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 01:01:26.171171 sshd[10730]: Connection closed by 139.178.89.65 port 54190 May 16 01:01:26.171551 sshd-session[10728]: pam_unix(sshd:session): session closed for user core May 16 01:01:26.174280 systemd[1]: sshd@28-147.28.151.230:22-139.178.89.65:54190.service: Deactivated successfully. May 16 01:01:26.175900 systemd[1]: session-21.scope: Deactivated successfully. May 16 01:01:26.176423 systemd-logind[2644]: Session 21 logged out. Waiting for processes to exit. May 16 01:01:26.177002 systemd-logind[2644]: Removed session 21.
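[Editor's note, not part of the captured log] The section above is dominated by the same two kubelet pod_workers entries repeating for goldmane-78d55f7ddc-hrh6s and whisker-849858999-52vcl. As a purely illustrative aid, a short sketch that condenses a journal export like this one into a per-pod summary of the images stuck in ImagePullBackOff; the script name and regular expressions are assumptions, relying only on the pod="..." fields and the escaped image references that appear verbatim in the entries above. It can be fed the exported log on standard input.

# summarize_backoff.py -- illustrative sketch: condenses repeated "Error syncing pod"
# entries from a journal export into one line per pod with the images failing to pull.
import re
import sys
from collections import defaultdict

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
# Image names sit inside escaped quotes (\\\") within the kubelet err string.
IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^\\"]+)\\+"')

def summarize(lines):
    stuck = defaultdict(set)
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        if not pod:
            continue
        for image in IMAGE_RE.findall(line):
            stuck[pod.group("pod")].add(image)
    return stuck

if __name__ == "__main__":
    for pod, images in sorted(summarize(sys.stdin).items()):
        print(f"{pod}: {', '.join(sorted(images))}")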