May 17 00:13:21.191680 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 17 00:13:21.191702 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:13:21.191711 kernel: KASLR enabled
May 17 00:13:21.191717 kernel: efi: EFI v2.7 by American Megatrends
May 17 00:13:21.191723 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea468818 RNG=0xebf00018 MEMRESERVE=0xe45d2f98
May 17 00:13:21.191728 kernel: random: crng init done
May 17 00:13:21.191736 kernel: esrt: Reserving ESRT space from 0x00000000ea468818 to 0x00000000ea468878.
May 17 00:13:21.191742 kernel: ACPI: Early table checksum verification disabled
May 17 00:13:21.191749 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 17 00:13:21.191755 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 17 00:13:21.191762 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 17 00:13:21.191768 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 17 00:13:21.191774 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 17 00:13:21.191780 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 17 00:13:21.191789 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 17 00:13:21.191795 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:13:21.191802 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 17 00:13:21.191809 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:13:21.191815 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 17 00:13:21.191821 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 17 00:13:21.191828 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 17 00:13:21.191834 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 17 00:13:21.191841 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 17 00:13:21.191849 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 17 00:13:21.191855 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 17 00:13:21.191861 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:13:21.191868 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 17 00:13:21.191874 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 17 00:13:21.191881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 17 00:13:21.191887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 17 00:13:21.191893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 17 00:13:21.191900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 17 00:13:21.191906 kernel: NUMA: NODE_DATA [mem 0x83fdffc9800-0x83fdffcefff]
May 17 00:13:21.191912 kernel: Zone ranges:
May 17 00:13:21.191919 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
May 17 00:13:21.191926 kernel: DMA32 empty
May 17 00:13:21.191932 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
May 17 00:13:21.191939 kernel: Movable zone start for each node
May 17 00:13:21.191945 kernel: Early memory node ranges
May 17 00:13:21.191952 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
May 17 00:13:21.191961 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
May 17 00:13:21.191968 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
May 17 00:13:21.191976 kernel: node 0: [mem 0x0000000094000000-0x00000000eba2dfff]
May 17 00:13:21.191983 kernel: node 0: [mem 0x00000000eba2e000-0x00000000ebeaffff]
May 17 00:13:21.192049 kernel: node 0: [mem 0x00000000ebeb0000-0x00000000ebeb9fff]
May 17 00:13:21.192056 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff]
May 17 00:13:21.192063 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 17 00:13:21.192070 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 17 00:13:21.192076 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 17 00:13:21.192083 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 17 00:13:21.192090 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff]
May 17 00:13:21.192097 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff]
May 17 00:13:21.192106 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 17 00:13:21.192113 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 17 00:13:21.192119 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 17 00:13:21.192126 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 17 00:13:21.192133 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 17 00:13:21.192140 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
May 17 00:13:21.192146 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
May 17 00:13:21.192153 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 17 00:13:21.192160 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 17 00:13:21.192167 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 17 00:13:21.192173 kernel: psci: probing for conduit method from ACPI.
May 17 00:13:21.192181 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:13:21.192188 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:13:21.192195 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 00:13:21.192202 kernel: psci: SMC Calling Convention v1.2
May 17 00:13:21.192208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 17 00:13:21.192215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 17 00:13:21.192222 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 17 00:13:21.192229 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 17 00:13:21.192235 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 17 00:13:21.192242 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 17 00:13:21.192249 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 17 00:13:21.192255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 17 00:13:21.192263 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 17 00:13:21.192270 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 17 00:13:21.192276 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 17 00:13:21.192283 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 17 00:13:21.192289 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 17 00:13:21.192296 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 17 00:13:21.192303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 17 00:13:21.192309 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 17 00:13:21.192316 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 17 00:13:21.192323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 17 00:13:21.192329 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 17 00:13:21.192336 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 17 00:13:21.192344 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 17 00:13:21.192351 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 17 00:13:21.192357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 17 00:13:21.192364 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 17 00:13:21.192371 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 17 00:13:21.192378 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 17 00:13:21.192384 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 17 00:13:21.192391 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 17 00:13:21.192398 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 17 00:13:21.192404 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 17 00:13:21.192411 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 17 00:13:21.192419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 17 00:13:21.192425 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 17 00:13:21.192432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 17 00:13:21.192439 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 17 00:13:21.192446 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 17 00:13:21.192452 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 17 00:13:21.192459 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 17 00:13:21.192466 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 17 00:13:21.192473 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 17 00:13:21.192480 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 17 00:13:21.192486 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 17 00:13:21.192493 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 17 00:13:21.192501 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 17 00:13:21.192508 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 17 00:13:21.192514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 17 00:13:21.192521 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 17 00:13:21.192528 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 17 00:13:21.192535 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 17 00:13:21.192541 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 17 00:13:21.192548 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 17 00:13:21.192561 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 17 00:13:21.192568 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 17 00:13:21.192576 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 17 00:13:21.192584 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 17 00:13:21.192591 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 17 00:13:21.192598 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 17 00:13:21.192605 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 17 00:13:21.192612 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 17 00:13:21.192620 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 17 00:13:21.192628 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 17 00:13:21.192635 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 17 00:13:21.192642 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 17 00:13:21.192649 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 17 00:13:21.192656 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 17 00:13:21.192663 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 17 00:13:21.192670 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 17 00:13:21.192677 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 17 00:13:21.192684 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 17 00:13:21.192692 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 17 00:13:21.192699 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 17 00:13:21.192707 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 17 00:13:21.192714 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 17 00:13:21.192721 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 17 00:13:21.192728 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 17 00:13:21.192736 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 17 00:13:21.192743 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 17 00:13:21.192750 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 17 00:13:21.192757 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 17 00:13:21.192764 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 17 00:13:21.192771 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:13:21.192778 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:13:21.192787 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 17 00:13:21.192794 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 17 00:13:21.192802 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 17 00:13:21.192809 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 17 00:13:21.192816 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 17 00:13:21.192823 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 17 00:13:21.192831 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 17 00:13:21.192838 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 17 00:13:21.192845 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 17 00:13:21.192852 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 17 00:13:21.192859 kernel: Detected PIPT I-cache on CPU0
May 17 00:13:21.192867 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:13:21.192875 kernel: CPU features: detected: Virtualization Host Extensions
May 17 00:13:21.192882 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:13:21.192889 kernel: CPU features: detected: Spectre-v4
May 17 00:13:21.192896 kernel: CPU features: detected: Spectre-BHB
May 17 00:13:21.192903 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:13:21.192911 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:13:21.192918 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:13:21.192925 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:13:21.192932 kernel: alternatives: applying boot alternatives
May 17 00:13:21.192941 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:13:21.192950 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:13:21.192957 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 17 00:13:21.192964 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 17 00:13:21.192971 kernel: printk: log_buf_len min size: 262144 bytes
May 17 00:13:21.192978 kernel: printk: log_buf_len: 1048576 bytes
May 17 00:13:21.192985 kernel: printk: early log buf free: 249904(95%)
May 17 00:13:21.192995 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 17 00:13:21.193003 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 17 00:13:21.193010 kernel: Fallback order for Node 0: 0
May 17 00:13:21.193017 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 17 00:13:21.193024 kernel: Policy zone: Normal
May 17 00:13:21.193033 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:13:21.193040 kernel: software IO TLB: area num 128.
May 17 00:13:21.193048 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 17 00:13:21.193055 kernel: Memory: 262922448K/268174336K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 5251888K reserved, 0K cma-reserved)
May 17 00:13:21.193063 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 17 00:13:21.193070 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:13:21.193077 kernel: rcu: RCU event tracing is enabled.
May 17 00:13:21.193085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 17 00:13:21.193092 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:13:21.193099 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:13:21.193107 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:13:21.193115 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 17 00:13:21.193123 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:13:21.193130 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 17 00:13:21.193137 kernel: GICv3: 672 SPIs implemented
May 17 00:13:21.193144 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:13:21.193151 kernel: Root IRQ handler: gic_handle_irq
May 17 00:13:21.193158 kernel: GICv3: GICv3 features: 16 PPIs
May 17 00:13:21.193165 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 17 00:13:21.193173 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 17 00:13:21.193180 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 17 00:13:21.193187 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 17 00:13:21.193194 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 17 00:13:21.193201 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 17 00:13:21.193210 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 17 00:13:21.193217 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 17 00:13:21.193224 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 17 00:13:21.193231 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 17 00:13:21.193238 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193246 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193253 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 17 00:13:21.193260 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193267 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193275 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 17 00:13:21.193282 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193291 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193298 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 17 00:13:21.193305 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193312 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193320 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 17 00:13:21.193327 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193334 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193341 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 17 00:13:21.193349 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193356 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193363 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 17 00:13:21.193372 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193380 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193387 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 17 00:13:21.193394 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:13:21.193401 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:13:21.193409 kernel: GICv3: using LPI property table @0x00000800003e0000
May 17 00:13:21.193416 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
May 17 00:13:21.193423 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:13:21.193430 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193438 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 17 00:13:21.193445 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 17 00:13:21.193453 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:13:21.193461 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:13:21.193468 kernel: Console: colour dummy device 80x25
May 17 00:13:21.193475 kernel: printk: console [tty0] enabled
May 17 00:13:21.193483 kernel: ACPI: Core revision 20230628
May 17 00:13:21.193490 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:13:21.193497 kernel: pid_max: default: 81920 minimum: 640
May 17 00:13:21.193505 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:13:21.193512 kernel: landlock: Up and running.
May 17 00:13:21.193519 kernel: SELinux: Initializing.
May 17 00:13:21.193528 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:13:21.193536 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:13:21.193543 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 00:13:21.193551 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 00:13:21.193558 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:13:21.193566 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:13:21.193573 kernel: Platform MSI: ITS@0x100100040000 domain created
May 17 00:13:21.193580 kernel: Platform MSI: ITS@0x100100060000 domain created
May 17 00:13:21.193588 kernel: Platform MSI: ITS@0x100100080000 domain created
May 17 00:13:21.193596 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 17 00:13:21.193604 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 17 00:13:21.193611 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 17 00:13:21.193618 kernel: Platform MSI: ITS@0x100100100000 domain created
May 17 00:13:21.193625 kernel: Platform MSI: ITS@0x100100120000 domain created
May 17 00:13:21.193633 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 17 00:13:21.193640 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 17 00:13:21.193647 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 17 00:13:21.193655 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 17 00:13:21.193663 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 17 00:13:21.193670 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 17 00:13:21.193678 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 17 00:13:21.193685 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 17 00:13:21.193692 kernel: Remapping and enabling EFI services.
May 17 00:13:21.193700 kernel: smp: Bringing up secondary CPUs ...
May 17 00:13:21.193707 kernel: Detected PIPT I-cache on CPU1
May 17 00:13:21.193714 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 17 00:13:21.193722 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 17 00:13:21.193731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193738 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 17 00:13:21.193745 kernel: Detected PIPT I-cache on CPU2
May 17 00:13:21.193753 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 17 00:13:21.193760 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 17 00:13:21.193767 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193775 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 17 00:13:21.193782 kernel: Detected PIPT I-cache on CPU3
May 17 00:13:21.193789 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 17 00:13:21.193797 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 17 00:13:21.193805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193812 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 17 00:13:21.193820 kernel: Detected PIPT I-cache on CPU4
May 17 00:13:21.193827 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 17 00:13:21.193834 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 17 00:13:21.193842 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193849 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 17 00:13:21.193856 kernel: Detected PIPT I-cache on CPU5
May 17 00:13:21.193863 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 17 00:13:21.193872 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 17 00:13:21.193880 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193887 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 17 00:13:21.193894 kernel: Detected PIPT I-cache on CPU6
May 17 00:13:21.193901 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 17 00:13:21.193909 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 17 00:13:21.193916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193923 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 17 00:13:21.193930 kernel: Detected PIPT I-cache on CPU7
May 17 00:13:21.193938 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 17 00:13:21.193946 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 17 00:13:21.193954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.193961 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 17 00:13:21.193968 kernel: Detected PIPT I-cache on CPU8
May 17 00:13:21.193976 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 17 00:13:21.193983 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 17 00:13:21.193992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194000 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 17 00:13:21.194007 kernel: Detected PIPT I-cache on CPU9
May 17 00:13:21.194014 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 17 00:13:21.194023 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 17 00:13:21.194031 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194038 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 17 00:13:21.194045 kernel: Detected PIPT I-cache on CPU10
May 17 00:13:21.194053 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 17 00:13:21.194060 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 17 00:13:21.194067 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194075 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 17 00:13:21.194082 kernel: Detected PIPT I-cache on CPU11
May 17 00:13:21.194091 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 17 00:13:21.194098 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 17 00:13:21.194105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194112 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 17 00:13:21.194120 kernel: Detected PIPT I-cache on CPU12
May 17 00:13:21.194127 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 17 00:13:21.194134 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 17 00:13:21.194141 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194148 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 17 00:13:21.194156 kernel: Detected PIPT I-cache on CPU13
May 17 00:13:21.194164 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 17 00:13:21.194172 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 17 00:13:21.194179 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194187 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 17 00:13:21.194194 kernel: Detected PIPT I-cache on CPU14
May 17 00:13:21.194201 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 17 00:13:21.194208 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 17 00:13:21.194216 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194223 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 17 00:13:21.194232 kernel: Detected PIPT I-cache on CPU15
May 17 00:13:21.194239 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 17 00:13:21.194246 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 17 00:13:21.194254 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194261 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 17 00:13:21.194269 kernel: Detected PIPT I-cache on CPU16
May 17 00:13:21.194276 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 17 00:13:21.194283 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 17 00:13:21.194291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194308 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 17 00:13:21.194317 kernel: Detected PIPT I-cache on CPU17
May 17 00:13:21.194324 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 17 00:13:21.194332 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 17 00:13:21.194340 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194347 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 17 00:13:21.194355 kernel: Detected PIPT I-cache on CPU18
May 17 00:13:21.194362 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 17 00:13:21.194370 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 17 00:13:21.194379 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194387 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 17 00:13:21.194395 kernel: Detected PIPT I-cache on CPU19
May 17 00:13:21.194402 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 17 00:13:21.194410 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
May 17 00:13:21.194418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194425 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 17 00:13:21.194435 kernel: Detected PIPT I-cache on CPU20
May 17 00:13:21.194443 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 17 00:13:21.194451 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
May 17 00:13:21.194459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194466 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 17 00:13:21.194474 kernel: Detected PIPT I-cache on CPU21
May 17 00:13:21.194482 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 17 00:13:21.194490 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
May 17 00:13:21.194497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194506 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 17 00:13:21.194514 kernel: Detected PIPT I-cache on CPU22
May 17 00:13:21.194523 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 17 00:13:21.194531 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
May 17 00:13:21.194538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194546 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 17 00:13:21.194554 kernel: Detected PIPT I-cache on CPU23
May 17 00:13:21.194561 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 17 00:13:21.194569 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
May 17 00:13:21.194578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194586 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 17 00:13:21.194593 kernel: Detected PIPT I-cache on CPU24
May 17 00:13:21.194601 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 17 00:13:21.194609 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
May 17 00:13:21.194617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194624 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 17 00:13:21.194632 kernel: Detected PIPT I-cache on CPU25
May 17 00:13:21.194640 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 17 00:13:21.194648 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
May 17 00:13:21.194656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194664 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 17 00:13:21.194673 kernel: Detected PIPT I-cache on CPU26
May 17 00:13:21.194681 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 17 00:13:21.194689 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
May 17 00:13:21.194697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194704 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 17 00:13:21.194712 kernel: Detected PIPT I-cache on CPU27
May 17 00:13:21.194720 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 17 00:13:21.194729 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
May 17 00:13:21.194736 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:13:21.194744 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 17
00:13:21.194752 kernel: Detected PIPT I-cache on CPU28 May 17 00:13:21.194759 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 17 00:13:21.194767 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 17 00:13:21.194775 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194782 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 17 00:13:21.194790 kernel: Detected PIPT I-cache on CPU29 May 17 00:13:21.194798 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 17 00:13:21.194807 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 17 00:13:21.194815 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194823 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] May 17 00:13:21.194830 kernel: Detected PIPT I-cache on CPU30 May 17 00:13:21.194838 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 17 00:13:21.194846 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 17 00:13:21.194854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194862 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 17 00:13:21.194869 kernel: Detected PIPT I-cache on CPU31 May 17 00:13:21.194878 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 17 00:13:21.194886 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 17 00:13:21.194894 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194902 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 17 00:13:21.194909 kernel: Detected PIPT I-cache on CPU32 May 17 00:13:21.194917 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 17 00:13:21.194924 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 May 17 00:13:21.194932 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194940 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 17 00:13:21.194948 kernel: Detected PIPT I-cache on CPU33 May 17 00:13:21.194957 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 17 00:13:21.194964 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 17 00:13:21.194972 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194980 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 17 00:13:21.194989 kernel: Detected PIPT I-cache on CPU34 May 17 00:13:21.194997 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 17 00:13:21.195005 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 17 00:13:21.195013 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195020 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 17 00:13:21.195030 kernel: Detected PIPT I-cache on CPU35 May 17 00:13:21.195038 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 17 00:13:21.195046 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 17 00:13:21.195053 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195061 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 17 00:13:21.195069 kernel: Detected PIPT I-cache on CPU36 May 17 00:13:21.195076 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 17 00:13:21.195084 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 17 00:13:21.195092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195099 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 
17 00:13:21.195108 kernel: Detected PIPT I-cache on CPU37 May 17 00:13:21.195116 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 17 00:13:21.195124 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 17 00:13:21.195131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195139 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 17 00:13:21.195146 kernel: Detected PIPT I-cache on CPU38 May 17 00:13:21.195154 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 17 00:13:21.195162 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 17 00:13:21.195170 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195179 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 17 00:13:21.195186 kernel: Detected PIPT I-cache on CPU39 May 17 00:13:21.195194 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 17 00:13:21.195203 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 17 00:13:21.195211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195218 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 17 00:13:21.195226 kernel: Detected PIPT I-cache on CPU40 May 17 00:13:21.195234 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 17 00:13:21.195243 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 17 00:13:21.195250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195258 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 17 00:13:21.195266 kernel: Detected PIPT I-cache on CPU41 May 17 00:13:21.195274 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 17 00:13:21.195281 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 May 17 00:13:21.195289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195297 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 17 00:13:21.195304 kernel: Detected PIPT I-cache on CPU42 May 17 00:13:21.195313 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 17 00:13:21.195321 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 17 00:13:21.195329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195336 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 17 00:13:21.195344 kernel: Detected PIPT I-cache on CPU43 May 17 00:13:21.195352 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 17 00:13:21.195359 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 17 00:13:21.195367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195375 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 17 00:13:21.195382 kernel: Detected PIPT I-cache on CPU44 May 17 00:13:21.195391 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 17 00:13:21.195399 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 17 00:13:21.195407 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195415 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 17 00:13:21.195422 kernel: Detected PIPT I-cache on CPU45 May 17 00:13:21.195430 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 17 00:13:21.195437 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 17 00:13:21.195445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195453 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
May 17 00:13:21.195462 kernel: Detected PIPT I-cache on CPU46 May 17 00:13:21.195469 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 17 00:13:21.195477 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 17 00:13:21.195485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195492 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 17 00:13:21.195500 kernel: Detected PIPT I-cache on CPU47 May 17 00:13:21.195508 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 17 00:13:21.195515 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 17 00:13:21.195523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195531 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 17 00:13:21.195539 kernel: Detected PIPT I-cache on CPU48 May 17 00:13:21.195547 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 17 00:13:21.195555 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 17 00:13:21.195563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195570 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 17 00:13:21.195578 kernel: Detected PIPT I-cache on CPU49 May 17 00:13:21.195586 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 17 00:13:21.195593 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 17 00:13:21.195601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195610 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 17 00:13:21.195618 kernel: Detected PIPT I-cache on CPU50 May 17 00:13:21.195625 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 17 00:13:21.195633 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 May 17 00:13:21.195641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195649 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 17 00:13:21.195656 kernel: Detected PIPT I-cache on CPU51 May 17 00:13:21.195665 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 17 00:13:21.195673 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 17 00:13:21.195682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195690 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 17 00:13:21.195697 kernel: Detected PIPT I-cache on CPU52 May 17 00:13:21.195705 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 17 00:13:21.195713 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 17 00:13:21.195720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195728 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 17 00:13:21.195736 kernel: Detected PIPT I-cache on CPU53 May 17 00:13:21.195743 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 17 00:13:21.195751 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 17 00:13:21.195761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195768 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 17 00:13:21.195776 kernel: Detected PIPT I-cache on CPU54 May 17 00:13:21.195784 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 17 00:13:21.195791 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 17 00:13:21.195799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195807 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] May 17 00:13:21.195814 kernel: Detected PIPT I-cache on CPU55 May 17 00:13:21.195822 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 17 00:13:21.195831 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 17 00:13:21.195839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195847 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 17 00:13:21.195854 kernel: Detected PIPT I-cache on CPU56 May 17 00:13:21.195862 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 17 00:13:21.195870 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 17 00:13:21.195878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195885 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 17 00:13:21.195893 kernel: Detected PIPT I-cache on CPU57 May 17 00:13:21.195901 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 17 00:13:21.195910 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 17 00:13:21.195918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195925 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 17 00:13:21.195933 kernel: Detected PIPT I-cache on CPU58 May 17 00:13:21.195940 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 17 00:13:21.195948 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 17 00:13:21.195956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195964 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 17 00:13:21.195971 kernel: Detected PIPT I-cache on CPU59 May 17 00:13:21.195980 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 17 00:13:21.195996 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 May 17 00:13:21.196004 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196012 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 17 00:13:21.196020 kernel: Detected PIPT I-cache on CPU60 May 17 00:13:21.196028 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 17 00:13:21.196036 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 17 00:13:21.196044 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196051 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 17 00:13:21.196059 kernel: Detected PIPT I-cache on CPU61 May 17 00:13:21.196069 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 17 00:13:21.196076 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 17 00:13:21.196084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196092 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 17 00:13:21.196100 kernel: Detected PIPT I-cache on CPU62 May 17 00:13:21.196107 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 17 00:13:21.196115 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 17 00:13:21.196123 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196130 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 17 00:13:21.196139 kernel: Detected PIPT I-cache on CPU63 May 17 00:13:21.196147 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 17 00:13:21.196155 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 17 00:13:21.196163 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196170 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] May 17 00:13:21.196178 kernel: Detected PIPT I-cache on CPU64 May 17 00:13:21.196186 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 17 00:13:21.196194 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 17 00:13:21.196202 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196209 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 17 00:13:21.196218 kernel: Detected PIPT I-cache on CPU65 May 17 00:13:21.196226 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 17 00:13:21.196234 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 17 00:13:21.196242 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196249 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 17 00:13:21.196257 kernel: Detected PIPT I-cache on CPU66 May 17 00:13:21.196264 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 17 00:13:21.196272 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 17 00:13:21.196280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196289 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 17 00:13:21.196297 kernel: Detected PIPT I-cache on CPU67 May 17 00:13:21.196304 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 17 00:13:21.196312 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 17 00:13:21.196320 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196327 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 17 00:13:21.196335 kernel: Detected PIPT I-cache on CPU68 May 17 00:13:21.196343 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 17 00:13:21.196351 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 May 17 00:13:21.196360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196367 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 17 00:13:21.196375 kernel: Detected PIPT I-cache on CPU69 May 17 00:13:21.196383 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 17 00:13:21.196391 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 17 00:13:21.196398 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196406 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 17 00:13:21.196414 kernel: Detected PIPT I-cache on CPU70 May 17 00:13:21.196421 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 17 00:13:21.196429 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 17 00:13:21.196438 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196446 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 17 00:13:21.196453 kernel: Detected PIPT I-cache on CPU71 May 17 00:13:21.196461 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 17 00:13:21.196469 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 17 00:13:21.196476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196484 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 17 00:13:21.196492 kernel: Detected PIPT I-cache on CPU72 May 17 00:13:21.196500 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 17 00:13:21.196509 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 17 00:13:21.196517 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196524 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] May 17 00:13:21.196532 kernel: Detected PIPT I-cache on CPU73 May 17 00:13:21.196539 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 17 00:13:21.196547 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 17 00:13:21.196555 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196563 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 17 00:13:21.196570 kernel: Detected PIPT I-cache on CPU74 May 17 00:13:21.196578 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 17 00:13:21.196587 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 17 00:13:21.196595 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196603 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 17 00:13:21.196610 kernel: Detected PIPT I-cache on CPU75 May 17 00:13:21.196618 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 17 00:13:21.196626 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 17 00:13:21.196633 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196641 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 17 00:13:21.196649 kernel: Detected PIPT I-cache on CPU76 May 17 00:13:21.196658 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 17 00:13:21.196666 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 17 00:13:21.196673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196681 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 17 00:13:21.196689 kernel: Detected PIPT I-cache on CPU77 May 17 00:13:21.196696 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 17 00:13:21.196704 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 17 00:13:21.196712 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196719 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 17 00:13:21.196727 kernel: Detected PIPT I-cache on CPU78 May 17 00:13:21.196736 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 17 00:13:21.196744 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 17 00:13:21.196751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196759 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 17 00:13:21.196767 kernel: Detected PIPT I-cache on CPU79 May 17 00:13:21.196774 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 17 00:13:21.196782 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 17 00:13:21.196790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196797 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 17 00:13:21.196806 kernel: smp: Brought up 1 node, 80 CPUs May 17 00:13:21.196814 kernel: SMP: Total of 80 processors activated. 
May 17 00:13:21.196822 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:13:21.196829 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:13:21.196837 kernel: CPU features: detected: Common not Private translations May 17 00:13:21.196845 kernel: CPU features: detected: CRC32 instructions May 17 00:13:21.196853 kernel: CPU features: detected: Enhanced Virtualization Traps May 17 00:13:21.196860 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:13:21.196868 kernel: CPU features: detected: LSE atomic instructions May 17 00:13:21.196877 kernel: CPU features: detected: Privileged Access Never May 17 00:13:21.196884 kernel: CPU features: detected: RAS Extension Support May 17 00:13:21.196892 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 17 00:13:21.196900 kernel: CPU: All CPU(s) started at EL2 May 17 00:13:21.196907 kernel: alternatives: applying system-wide alternatives May 17 00:13:21.196915 kernel: devtmpfs: initialized May 17 00:13:21.196923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:13:21.196930 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 17 00:13:21.196938 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:13:21.196947 kernel: SMBIOS 3.4.0 present. 
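Several of the CPU features detected above (LSE atomics, CRC32 instructions, SSBS) are also exported to userspace as hwcap flags in the `Features` line of `/proc/cpuinfo` on arm64. A sketch of checking for them, run here against a sample flags string rather than the live file; the mapping dictionary is an assumption for illustration:

```python
# Assumed mapping from the kernel's detection messages to arm64 hwcap
# flag names ("atomics" = LSE, per the kernel's elf_hwcaps documentation).
FEATURE_TO_HWCAP = {
    "LSE atomic instructions": "atomics",
    "CRC32 instructions": "crc32",
    "Speculative Store Bypassing Safe (SSBS)": "ssbs",
}

def features_present(features_line: str, wanted: list[str]) -> dict:
    """Return, for each requested feature, whether its hwcap flag
    appears in a /proc/cpuinfo-style space-separated Features line."""
    flags = set(features_line.split())
    return {name: FEATURE_TO_HWCAP[name] in flags for name in wanted}

# Sample Features line typical of an Ampere Altra core (illustrative).
sample = "fp asimd evtstrm crc32 atomics cpuid asimdrdm lrcpc dcpop ssbs"
result = features_present(sample, list(FEATURE_TO_HWCAP))
print(result)
```

On a live system the same check would read the `Features` line from `/proc/cpuinfo` instead of a hard-coded sample.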
May 17 00:13:21.196955 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021
May 17 00:13:21.196963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:13:21.196971 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
May 17 00:13:21.196978 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:13:21.196986 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:13:21.196997 kernel: audit: initializing netlink subsys (disabled)
May 17 00:13:21.197004 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1
May 17 00:13:21.197012 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:13:21.197021 kernel: cpuidle: using governor menu
May 17 00:13:21.197029 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:13:21.197036 kernel: ASID allocator initialised with 32768 entries
May 17 00:13:21.197044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:13:21.197052 kernel: Serial: AMBA PL011 UART driver
May 17 00:13:21.197059 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 00:13:21.197067 kernel: Modules: 0 pages in range for non-PLT usage
May 17 00:13:21.197075 kernel: Modules: 509024 pages in range for PLT usage
May 17 00:13:21.197083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:13:21.197092 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:13:21.197099 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:13:21.197107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 00:13:21.197115 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:13:21.197123 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:13:21.197130 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:13:21.197138 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 00:13:21.197146 kernel: ACPI: Added _OSI(Module Device)
May 17 00:13:21.197153 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:13:21.197162 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:13:21.197170 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:13:21.197178 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
May 17 00:13:21.197185 kernel: ACPI: Interpreter enabled
May 17 00:13:21.197193 kernel: ACPI: Using GIC for interrupt routing
May 17 00:13:21.197200 kernel: ACPI: MCFG table detected, 8 entries
May 17 00:13:21.197208 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197216 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197224 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197233 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197241 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197249 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197256 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197264 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
May 17 00:13:21.197272 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
May 17 00:13:21.197280 kernel: printk: console [ttyAMA0] enabled
May 17 00:13:21.197288 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
May 17 00:13:21.197296 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
May 17 00:13:21.197423 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:13:21.197498 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:13:21.197564 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
May 17 00:13:21.197627 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:13:21.197689 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
May 17 00:13:21.197751 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
May 17 00:13:21.197764 kernel: PCI host bridge to bus 000d:00
May 17 00:13:21.197837 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
May 17 00:13:21.197895 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
May 17 00:13:21.197952 kernel: pci_bus 000d:00: root bus resource [bus 00-ff]
May 17 00:13:21.198036 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000
May 17 00:13:21.198116 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400
May 17 00:13:21.198183 kernel: pci 000d:00:01.0: enabling Extended Tags
May 17 00:13:21.198252 kernel: pci 000d:00:01.0: supports D1 D2
May 17 00:13:21.198319 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.198393 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400
May 17 00:13:21.198460 kernel: pci 000d:00:02.0: supports D1 D2
May 17 00:13:21.198525 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.198598 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400
May 17 00:13:21.198666 kernel: pci 000d:00:03.0: supports D1 D2
May 17 00:13:21.198733 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.198805 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400
May 17 00:13:21.198872 kernel: pci 000d:00:04.0: supports D1 D2
May 17 00:13:21.198938 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.198948 kernel: acpiphp: Slot [1] registered
May 17 00:13:21.198956 kernel: acpiphp: Slot [2] registered
May 17 00:13:21.198964 kernel: acpiphp: Slot [3] registered
May 17 00:13:21.198974 kernel: acpiphp: Slot [4] registered
May 17 00:13:21.199037 kernel: pci_bus 000d:00: on NUMA node 0
May 17 00:13:21.199105 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:13:21.199172 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.199238 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.199305 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:13:21.199369 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.199438 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.199505 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:13:21.199574 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 00:13:21.199640 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 00:13:21.199706 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:13:21.199771 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 00:13:21.199836 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:13:21.199905 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff]
May 17 00:13:21.199970 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref]
May 17 00:13:21.200039 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff]
May 17 00:13:21.200105 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref]
May 17 00:13:21.200171 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff]
May 17 00:13:21.200236 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref]
May 17 00:13:21.200302 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff]
May 17 00:13:21.200367 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref]
May 17 00:13:21.200435 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.200499 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.200564 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.200630 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.200694 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.200759 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.200824 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.200891 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.200956 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.201027 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.201092 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.201158 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.201223 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.201288 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.201353 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.201419 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.201485 kernel: pci 000d:00:01.0: PCI bridge to [bus 01]
May 17 00:13:21.201551 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]
May 17 00:13:21.201617 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]
May 17 00:13:21.201683 kernel: pci 000d:00:02.0: PCI bridge to [bus 02]
May 17 00:13:21.201748 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]
May 17 00:13:21.201813 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]
May 17 00:13:21.201882 kernel: pci 000d:00:03.0: PCI bridge to [bus 03]
May 17 00:13:21.201946 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]
May 17 00:13:21.202016 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]
May 17 00:13:21.202080 kernel: pci 000d:00:04.0: PCI bridge to [bus 04]
May 17 00:13:21.202146 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]
May 17 00:13:21.202211 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]
May 17 00:13:21.202274 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window]
May 17 00:13:21.202331 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window]
May 17 00:13:21.202402 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff]
May 17 00:13:21.202462 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref]
May 17 00:13:21.202533 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff]
May 17 00:13:21.202594 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref]
May 17 00:13:21.202674 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff]
May 17 00:13:21.202735 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref]
May 17 00:13:21.202803 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff]
May 17 00:13:21.202864 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref]
May 17 00:13:21.202874 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff])
May 17 00:13:21.202944 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:13:21.203016 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:13:21.203079 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability]
May 17 00:13:21.203142 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:13:21.203204 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00
May 17 00:13:21.203268 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff]
May 17 00:13:21.203278 kernel: PCI host bridge to bus 0000:00
May 17 00:13:21.203343 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window]
May 17 00:13:21.203407 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window]
May 17 00:13:21.203464 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:13:21.203540 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000
May 17 00:13:21.203613 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400
May 17 00:13:21.203679 kernel: pci 0000:00:01.0: enabling Extended Tags
May 17 00:13:21.203744 kernel: pci 0000:00:01.0: supports D1 D2
May 17 00:13:21.203808 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.203885 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400
May 17 00:13:21.203950 kernel: pci 0000:00:02.0: supports D1 D2
May 17 00:13:21.204019 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.204092 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400
May 17 00:13:21.204159 kernel: pci 0000:00:03.0: supports D1 D2
May 17 00:13:21.204223 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.204296 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400
May 17 00:13:21.204364 kernel: pci 0000:00:04.0: supports D1 D2
May 17 00:13:21.204430 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.204440 kernel: acpiphp: Slot [1-1] registered
May 17 00:13:21.204448 kernel: acpiphp: Slot [2-1] registered
May 17 00:13:21.204456 kernel: acpiphp: Slot [3-1] registered
May 17 00:13:21.204464 kernel: acpiphp: Slot [4-1] registered
May 17 00:13:21.204519 kernel: pci_bus 0000:00: on NUMA node 0
May 17 00:13:21.204585 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:13:21.204649 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.204717 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.204781 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:13:21.204846 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.204912 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.204977 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:13:21.205046 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 00:13:21.205113 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 00:13:21.205180 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:13:21.205244 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 00:13:21.205309 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:13:21.205374 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff]
May 17 00:13:21.205440 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 17 00:13:21.205505 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff]
May 17 00:13:21.205573 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 17 00:13:21.205638 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff]
May 17 00:13:21.205703 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 17 00:13:21.205768 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff]
May 17 00:13:21.205834 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 17 00:13:21.205897 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.205963 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206032 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.206098 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206163 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.206227 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206293 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.206356 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206422 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.206485 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206550 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.206615 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206682 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.206746 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206812 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.206876 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.206940 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 00:13:21.207009 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]
May 17 00:13:21.207075 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 17 00:13:21.207140 kernel: pci 0000:00:02.0: PCI bridge to [bus 02]
May 17 00:13:21.207207 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]
May 17 00:13:21.207275 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 17 00:13:21.207341 kernel: pci 0000:00:03.0: PCI bridge to [bus 03]
May 17 00:13:21.207409 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]
May 17 00:13:21.207474 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 17 00:13:21.207540 kernel: pci 0000:00:04.0: PCI bridge to [bus 04]
May 17 00:13:21.207604 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]
May 17 00:13:21.207670 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 17 00:13:21.207730 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window]
May 17 00:13:21.207790 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window]
May 17 00:13:21.207860 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff]
May 17 00:13:21.207923 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 17 00:13:21.207993 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff]
May 17 00:13:21.208055 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 17 00:13:21.208130 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff]
May 17 00:13:21.208196 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 17 00:13:21.208265 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff]
May 17 00:13:21.208325 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 17 00:13:21.208335 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff])
May 17 00:13:21.208406 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:13:21.208470 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:13:21.208534 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability]
May 17 00:13:21.208598 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:13:21.208660 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00
May 17 00:13:21.208722 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff]
May 17 00:13:21.208733 kernel: PCI host bridge to bus 0005:00
May 17 00:13:21.208800 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window]
May 17 00:13:21.208857 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window]
May 17 00:13:21.208915 kernel: pci_bus 0005:00: root bus resource [bus 00-ff]
May 17 00:13:21.208991 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000
May 17 00:13:21.209067 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400
May 17 00:13:21.209134 kernel: pci 0005:00:01.0: supports D1 D2
May 17 00:13:21.209199 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.209274 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400
May 17 00:13:21.209340 kernel: pci 0005:00:03.0: supports D1 D2
May 17 00:13:21.209409 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.209481 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400
May 17 00:13:21.209547 kernel: pci 0005:00:05.0: supports D1 D2
May 17 00:13:21.209612 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.209687 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400
May 17 00:13:21.209754 kernel: pci 0005:00:07.0: supports D1 D2
May 17 00:13:21.209820 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.209832 kernel: acpiphp: Slot [1-2] registered
May 17 00:13:21.209841 kernel: acpiphp: Slot [2-2] registered
May 17 00:13:21.209912 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802
May 17 00:13:21.209983 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit]
May 17 00:13:21.210054 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref]
May 17 00:13:21.210129 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802
May 17 00:13:21.210198 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit]
May 17 00:13:21.210267 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref]
May 17 00:13:21.210328 kernel: pci_bus 0005:00: on NUMA node 0
May 17 00:13:21.210393 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:13:21.210460 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.210544 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.210614 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:13:21.210680 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.210750 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.210815 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:13:21.210881 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 00:13:21.210947 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 00:13:21.211028 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:13:21.211096 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 00:13:21.211161 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000
May 17 00:13:21.211234 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff]
May 17 00:13:21.211299 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 17 00:13:21.211366 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff]
May 17 00:13:21.211431 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 17 00:13:21.211497 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff]
May 17 00:13:21.211561 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 17 00:13:21.211627 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff]
May 17 00:13:21.211692 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 17 00:13:21.211759 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.211824 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.211891 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.211957 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.212026 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.212093 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.212158 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.212224 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.212291 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.212357 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.212422 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.212487 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.212553 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.212619 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.212684 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.212749 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.212813 kernel: pci 0005:00:01.0: PCI bridge to [bus 01]
May 17 00:13:21.212880 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]
May 17 00:13:21.212946 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 17 00:13:21.213018 kernel: pci 0005:00:03.0: PCI bridge to [bus 02]
May 17 00:13:21.213084 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]
May 17 00:13:21.213151 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 17 00:13:21.213220 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref]
May 17 00:13:21.213290 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit]
May 17 00:13:21.213355 kernel: pci 0005:00:05.0: PCI bridge to [bus 03]
May 17 00:13:21.213419 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]
May 17 00:13:21.213485 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 17 00:13:21.213552 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref]
May 17 00:13:21.213620 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit]
May 17 00:13:21.213684 kernel: pci 0005:00:07.0: PCI bridge to [bus 04]
May 17 00:13:21.213753 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]
May 17 00:13:21.213819 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 17 00:13:21.213880 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window]
May 17 00:13:21.213938 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window]
May 17 00:13:21.214010 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff]
May 17 00:13:21.214073 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 17 00:13:21.214150 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff]
May 17 00:13:21.214212 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 17 00:13:21.214279 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff]
May 17 00:13:21.214342 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 17 00:13:21.214410 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff]
May 17 00:13:21.214474 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 17 00:13:21.214484 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff])
May 17 00:13:21.214554 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:13:21.214618 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:13:21.214691 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability]
May 17 00:13:21.214757 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:13:21.214821 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00
May 17 00:13:21.214883 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff]
May 17 00:13:21.214896 kernel: PCI host bridge to bus 0003:00
May 17 00:13:21.214963 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window]
May 17 00:13:21.215025 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window]
May 17 00:13:21.215084 kernel: pci_bus 0003:00: root bus resource [bus 00-ff]
May 17 00:13:21.215161 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000
May 17 00:13:21.215238 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400
May 17 00:13:21.215310 kernel: pci 0003:00:01.0: supports D1 D2
May 17 00:13:21.215377 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.215456 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400
May 17 00:13:21.215522 kernel: pci 0003:00:03.0: supports D1 D2
May 17 00:13:21.215589 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.215661 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400
May 17 00:13:21.215727 kernel: pci 0003:00:05.0: supports D1 D2
May 17 00:13:21.215795 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot
May 17 00:13:21.215806 kernel: acpiphp: Slot [1-3] registered
May 17 00:13:21.215813 kernel: acpiphp: Slot [2-3] registered
May 17 00:13:21.215885 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000
May 17 00:13:21.215952 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff]
May 17 00:13:21.216025 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f]
May 17 00:13:21.216093 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff]
May 17 00:13:21.216160 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold
May 17 00:13:21.216230 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref]
May 17 00:13:21.216296 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 00:13:21.216364 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref]
May 17 00:13:21.216431 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs)
May 17 00:13:21.216499 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link)
May 17 00:13:21.216574 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000
May 17 00:13:21.216642 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff]
May 17 00:13:21.216709 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f]
May 17 00:13:21.216778 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff]
May 17 00:13:21.216846 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold
May 17 00:13:21.216912 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref]
May 17 00:13:21.216980 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 00:13:21.217051 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref]
May 17 00:13:21.217121 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs)
May 17 00:13:21.217182 kernel: pci_bus 0003:00: on NUMA node 0
May 17 00:13:21.217250 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:13:21.217314 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.217380 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 00:13:21.217449 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:13:21.217515 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.217582 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 00:13:21.217651 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000
May 17 00:13:21.217718 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000
May 17 00:13:21.217782 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 17 00:13:21.217848 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref]
May 17 00:13:21.217925 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 17 00:13:21.217996 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref]
May 17 00:13:21.218062 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 17 00:13:21.218128 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 00:13:21.218196 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.218262 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.218327 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.218393 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.218458 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.218524 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.218589 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.218654 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.218721 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.218786 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.218851 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:13:21.218916 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:13:21.218982 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
May 17 00:13:21.219050 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]
May 17 00:13:21.219116 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]
May 17 00:13:21.219181 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
May 17 00:13:21.219249 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]
May 17 00:13:21.219316 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]
May 17 00:13:21.219387 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff]
May 17 00:13:21.219455 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff]
May 17 00:13:21.219523 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff]
May 17 00:13:21.219591 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref]
May 17 00:13:21.219660 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref]
May 17 00:13:21.219728 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff]
May 17 00:13:21.219795 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref]
May 17 00:13:21.219863 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref]
May 17 00:13:21.219929 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 17 00:13:21.220000 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 17 00:13:21.220068 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 17 00:13:21.220138 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 17 00:13:21.220206 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 17 00:13:21.220275 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 17 00:13:21.220343 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 17 00:13:21.220410 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 17 00:13:21.220476 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
May 17 00:13:21.220541 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
May 17 00:13:21.220610 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 00:13:21.220669 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 00:13:21.220728 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
May 17 00:13:21.220786 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
May 17 00:13:21.220864 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
May 17 00:13:21.220926 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
May 17 00:13:21.221237 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
May 17 00:13:21.221310 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
May 17 00:13:21.221378 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
May 17 00:13:21.221437 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 00:13:21.221449 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
May 17 00:13:21.221518 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:13:21.221581 kernel: acpi PNP0A08:04: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 17 00:13:21.221647 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.221709 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.221771 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 17 00:13:21.221832 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 17 00:13:21.221843 kernel: PCI host bridge to bus 000c:00 May 17 00:13:21.221908 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 17 00:13:21.221965 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 17 00:13:21.222032 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 17 00:13:21.222108 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:13:21.222185 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:13:21.222250 kernel: pci 000c:00:01.0: enabling Extended Tags May 17 00:13:21.222315 kernel: pci 000c:00:01.0: supports D1 D2 May 17 00:13:21.222381 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.222454 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:13:21.222521 kernel: pci 000c:00:02.0: supports D1 D2 May 17 00:13:21.222586 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 17 00:13:21.222659 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:13:21.222724 kernel: pci 000c:00:03.0: supports D1 D2 May 17 00:13:21.222789 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.222859 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:13:21.222925 kernel: pci 000c:00:04.0: supports D1 D2 May 17 00:13:21.222994 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 17 00:13:21.223005 kernel: acpiphp: Slot [1-4] registered May 17 00:13:21.223013 
kernel: acpiphp: Slot [2-4] registered May 17 00:13:21.223021 kernel: acpiphp: Slot [3-2] registered May 17 00:13:21.223030 kernel: acpiphp: Slot [4-2] registered May 17 00:13:21.223086 kernel: pci_bus 000c:00: on NUMA node 0 May 17 00:13:21.223150 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.223216 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.223282 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.223347 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.223411 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.223476 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.223540 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.223604 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.223668 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.223735 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.223799 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.223863 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.223928 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 17 00:13:21.223995 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] 
May 17 00:13:21.224062 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 17 00:13:21.224125 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:13:21.224193 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 17 00:13:21.224257 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:13:21.224321 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 17 00:13:21.224385 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:13:21.224450 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224514 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224578 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224642 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224709 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224772 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224837 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224901 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224964 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225032 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225095 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225160 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225226 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225290 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225354 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225419 kernel: pci 
000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225482 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.225547 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 17 00:13:21.225610 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 00:13:21.225675 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 17 00:13:21.225741 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 17 00:13:21.225806 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:13:21.225871 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.225934 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 17 00:13:21.226002 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:13:21.226067 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 17 00:13:21.226135 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 17 00:13:21.226198 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:13:21.226258 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 17 00:13:21.226315 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 17 00:13:21.226384 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 17 00:13:21.226443 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 00:13:21.226519 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 17 00:13:21.226582 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:13:21.226649 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 17 00:13:21.226709 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:13:21.226776 kernel: pci_bus 000c:04: resource 1 [mem 
0x40600000-0x407fffff] May 17 00:13:21.226835 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:13:21.226846 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 17 00:13:21.226917 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.226980 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.227046 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.227108 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.227170 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 17 00:13:21.227231 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 17 00:13:21.227242 kernel: PCI host bridge to bus 0002:00 May 17 00:13:21.227309 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 17 00:13:21.227368 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 17 00:13:21.227424 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 17 00:13:21.227496 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:13:21.227567 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:13:21.227633 kernel: pci 0002:00:01.0: supports D1 D2 May 17 00:13:21.227701 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.227773 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:13:21.227839 kernel: pci 0002:00:03.0: supports D1 D2 May 17 00:13:21.227903 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.227975 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:13:21.228045 kernel: pci 0002:00:05.0: supports D1 D2 May 17 00:13:21.228110 kernel: pci 0002:00:05.0: PME# supported 
from D0 D1 D3hot May 17 00:13:21.228186 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 17 00:13:21.228251 kernel: pci 0002:00:07.0: supports D1 D2 May 17 00:13:21.228314 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 17 00:13:21.228325 kernel: acpiphp: Slot [1-5] registered May 17 00:13:21.228333 kernel: acpiphp: Slot [2-5] registered May 17 00:13:21.228341 kernel: acpiphp: Slot [3-3] registered May 17 00:13:21.228349 kernel: acpiphp: Slot [4-3] registered May 17 00:13:21.228406 kernel: pci_bus 0002:00: on NUMA node 0 May 17 00:13:21.228472 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.228541 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.228605 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.228674 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.228739 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.228806 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.228872 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.228936 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.229004 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.229071 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.229137 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 
00:13:21.229202 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.229270 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 17 00:13:21.229335 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:13:21.229399 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 17 00:13:21.229463 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:13:21.229528 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 17 00:13:21.229592 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:13:21.229656 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 17 00:13:21.229722 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:13:21.229790 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.229858 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.229923 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.229990 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230056 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230121 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230196 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230266 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230330 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230396 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230460 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230526 kernel: pci 0002:00:05.0: BAR 13: failed 
to assign [io size 0x1000] May 17 00:13:21.230590 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230654 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230718 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230783 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230850 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.230915 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 17 00:13:21.230982 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:13:21.231135 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 17 00:13:21.231201 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 17 00:13:21.231265 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:13:21.231329 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 17 00:13:21.231397 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 17 00:13:21.231461 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:13:21.231525 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 17 00:13:21.231589 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 17 00:13:21.231653 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:13:21.231713 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 17 00:13:21.231772 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 17 00:13:21.231841 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 17 00:13:21.231900 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:13:21.231967 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 17 00:13:21.232031 kernel: pci_bus 0002:02: 
resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:13:21.232105 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 17 00:13:21.232169 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:13:21.232236 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 17 00:13:21.232296 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:13:21.232307 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 17 00:13:21.232377 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.232441 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.232504 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.232568 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.232631 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 17 00:13:21.232693 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 17 00:13:21.232704 kernel: PCI host bridge to bus 0001:00 May 17 00:13:21.232768 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 17 00:13:21.232826 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 17 00:13:21.232886 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 17 00:13:21.232956 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:13:21.233034 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:13:21.233099 kernel: pci 0001:00:01.0: enabling Extended Tags May 17 00:13:21.233165 kernel: pci 0001:00:01.0: supports D1 D2 May 17 00:13:21.233237 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233310 kernel: pci 0001:00:02.0: 
[1def:e102] type 01 class 0x060400 May 17 00:13:21.233378 kernel: pci 0001:00:02.0: supports D1 D2 May 17 00:13:21.233443 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233514 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:13:21.233579 kernel: pci 0001:00:03.0: supports D1 D2 May 17 00:13:21.233644 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233715 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:13:21.233784 kernel: pci 0001:00:04.0: supports D1 D2 May 17 00:13:21.233849 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233860 kernel: acpiphp: Slot [1-6] registered May 17 00:13:21.233932 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 00:13:21.234006 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 00:13:21.234077 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 17 00:13:21.234142 kernel: pci 0001:01:00.0: PME# supported from D3cold May 17 00:13:21.234209 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:13:21.234285 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 00:13:21.234353 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 00:13:21.234420 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 17 00:13:21.234486 kernel: pci 0001:01:00.1: PME# supported from D3cold May 17 00:13:21.234497 kernel: acpiphp: Slot [2-6] registered May 17 00:13:21.234505 kernel: acpiphp: Slot [3-4] registered May 17 00:13:21.234513 kernel: acpiphp: Slot [4-4] registered May 17 00:13:21.234572 kernel: pci_bus 0001:00: on NUMA node 0 May 17 00:13:21.234637 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.234703 kernel: pci 
0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.234767 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.234831 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.234896 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.234961 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.235128 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.235202 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.235268 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.235332 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.235396 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:13:21.235460 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 17 00:13:21.235525 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 17 00:13:21.235591 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:13:21.235655 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 17 00:13:21.235719 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:13:21.235782 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 17 00:13:21.235848 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:13:21.235912 kernel: 
pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.235977 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236044 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236112 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236175 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236239 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236303 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236366 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236431 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236495 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236558 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236621 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236687 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236751 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236815 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236878 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236945 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 00:13:21.237016 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 00:13:21.237083 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 17 00:13:21.237150 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 17 00:13:21.237216 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.237281 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 17 00:13:21.237344 
kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:13:21.237409 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 17 00:13:21.237473 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 17 00:13:21.237537 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:13:21.237603 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.237668 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 17 00:13:21.237732 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:13:21.237797 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 17 00:13:21.237861 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 17 00:13:21.237926 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:13:21.237986 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 17 00:13:21.238048 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 17 00:13:21.238123 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 17 00:13:21.238184 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:13:21.238251 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 17 00:13:21.238310 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:13:21.238377 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 17 00:13:21.238439 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:13:21.238506 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 17 00:13:21.238566 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:13:21.238577 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 17 00:13:21.238646 kernel: acpi 
PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.238709 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.238774 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.238836 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.238899 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 17 00:13:21.238961 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 17 00:13:21.238972 kernel: PCI host bridge to bus 0004:00 May 17 00:13:21.239038 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 17 00:13:21.239100 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 17 00:13:21.239158 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 17 00:13:21.239231 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:13:21.239302 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:13:21.239368 kernel: pci 0004:00:01.0: supports D1 D2 May 17 00:13:21.239432 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.239505 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:13:21.239571 kernel: pci 0004:00:03.0: supports D1 D2 May 17 00:13:21.239638 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.239710 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:13:21.239776 kernel: pci 0004:00:05.0: supports D1 D2 May 17 00:13:21.239840 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 17 00:13:21.239915 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 17 00:13:21.239982 kernel: pci 0004:01:00.0: enabling Extended Tags May 17 00:13:21.240052 kernel: pci 0004:01:00.0: supports D1 D2 May 17 
00:13:21.240122 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:13:21.240201 kernel: pci_bus 0004:02: extended config space not accessible May 17 00:13:21.240280 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 17 00:13:21.240349 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 17 00:13:21.240419 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 17 00:13:21.240488 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 17 00:13:21.240556 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 17 00:13:21.240628 kernel: pci 0004:02:00.0: supports D1 D2 May 17 00:13:21.240697 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:13:21.240773 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 17 00:13:21.240840 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 17 00:13:21.240907 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:13:21.240968 kernel: pci_bus 0004:00: on NUMA node 0 May 17 00:13:21.241037 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 17 00:13:21.241105 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.241170 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.241234 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:13:21.241299 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.241364 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.241428 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 
00:13:21.241493 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 00:13:21.241559 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:13:21.241624 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 17 00:13:21.241688 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:13:21.241753 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 17 00:13:21.241816 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:13:21.241881 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.241945 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242016 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242082 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242146 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242210 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242274 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242338 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242402 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242467 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242530 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242597 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242664 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 00:13:21.242731 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242797 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242867 kernel: pci 
0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 17 00:13:21.242937 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 17 00:13:21.243008 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 17 00:13:21.243077 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 17 00:13:21.243146 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 17 00:13:21.243215 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 17 00:13:21.243281 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 17 00:13:21.243346 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 17 00:13:21.243411 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:13:21.243478 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 17 00:13:21.243544 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.243609 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 17 00:13:21.243676 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:13:21.243741 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 17 00:13:21.243806 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 17 00:13:21.243870 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:13:21.243930 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 00:13:21.243986 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 17 00:13:21.244050 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 17 00:13:21.244118 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 17 00:13:21.244179 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:13:21.244242 kernel: pci_bus 0004:02: resource 1 [mem 
0x20000000-0x22ffffff] May 17 00:13:21.244309 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 17 00:13:21.244369 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:13:21.244439 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 17 00:13:21.244499 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:13:21.244510 kernel: iommu: Default domain type: Translated May 17 00:13:21.244519 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:13:21.244527 kernel: efivars: Registered efivars operations May 17 00:13:21.244594 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 17 00:13:21.244664 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 17 00:13:21.244733 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 17 00:13:21.244746 kernel: vgaarb: loaded May 17 00:13:21.244754 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:13:21.244763 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:13:21.244771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:13:21.244779 kernel: pnp: PnP ACPI init May 17 00:13:21.244851 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 17 00:13:21.244914 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 17 00:13:21.244977 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 17 00:13:21.245039 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved May 17 00:13:21.245100 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 17 00:13:21.245161 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 17 00:13:21.245222 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved May 17 00:13:21.245282 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 17 00:13:21.245293 kernel: pnp: PnP ACPI: found 1 devices May 17 00:13:21.245304 kernel: NET: Registered PF_INET protocol family May 17 00:13:21.245312 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245321 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 00:13:21.245329 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:13:21.245337 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:13:21.245346 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245354 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 17 00:13:21.245363 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245371 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245381 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:13:21.245448 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 17 00:13:21.245460 kernel: kvm [1]: IPA Size Limit: 48 bits May 17 00:13:21.245468 kernel: kvm [1]: GICv3: no GICV resource entry May 17 00:13:21.245476 kernel: kvm [1]: disabling GICv2 emulation May 17 00:13:21.245484 kernel: kvm [1]: GIC system register CPU interface enabled May 17 00:13:21.245492 kernel: kvm [1]: vgic interrupt IRQ9 May 17 00:13:21.245500 kernel: kvm [1]: VHE mode initialized successfully May 17 00:13:21.245509 kernel: Initialise system trusted keyrings May 17 00:13:21.245518 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 17 00:13:21.245526 kernel: Key type asymmetric registered May 17 00:13:21.245536 kernel: Asymmetric key parser 'x509' registered May 17 00:13:21.245544 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 17 00:13:21.245552 kernel: io scheduler mq-deadline registered May 17 00:13:21.245561 kernel: io scheduler kyber registered May 17 00:13:21.245569 kernel: io scheduler bfq registered May 17 00:13:21.245577 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:13:21.245585 kernel: ACPI: button: Power Button [PWRB] May 17 00:13:21.245595 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 17 00:13:21.245603 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:13:21.245677 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 17 00:13:21.245740 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.245803 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.245864 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 17 00:13:21.245926 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 17 00:13:21.245991 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 17 00:13:21.246062 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 17 00:13:21.246123 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.246185 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.246246 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 17 00:13:21.246307 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 17 00:13:21.246368 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 17 00:13:21.246439 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 17 00:13:21.246502 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.246562 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.246624 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 17 00:13:21.246685 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 17 00:13:21.246747 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 17 00:13:21.246819 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 17 00:13:21.246882 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.246943 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.247008 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 17 00:13:21.247069 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 17 00:13:21.247133 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 17 00:13:21.247210 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 17 00:13:21.247276 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.247337 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.247398 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 17 00:13:21.247459 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 17 00:13:21.247520 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 17 00:13:21.247591 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 17 00:13:21.247654 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.247716 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.247776 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 17 00:13:21.247838 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 17 00:13:21.247898 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 17 00:13:21.247967 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 17 00:13:21.248031 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.248095 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.248157 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 17 00:13:21.248218 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 17 00:13:21.248278 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 17 00:13:21.248349 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 17 00:13:21.248412 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.248476 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.248539 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 17 00:13:21.248599 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 17 00:13:21.248661 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 17 00:13:21.248672 kernel: thunder_xcv, ver 1.0 May 17 00:13:21.248681 kernel: thunder_bgx, ver 1.0 May 17 00:13:21.248689 kernel: nicpf, ver 1.0 May 17 00:13:21.248697 kernel: nicvf, ver 1.0 May 17 00:13:21.248767 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:13:21.248830 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:13:19 UTC (1747440799) May 17 00:13:21.248841 kernel: efifb: probing for efifb May 17 00:13:21.248850 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 17 00:13:21.248858 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:13:21.248866 kernel: efifb: scrolling: redraw May 
17 00:13:21.248874 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:13:21.248883 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:13:21.248893 kernel: fb0: EFI VGA frame buffer device May 17 00:13:21.248901 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 17 00:13:21.248910 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:13:21.248918 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:13:21.248926 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:13:21.248935 kernel: watchdog: Hard watchdog permanently disabled May 17 00:13:21.248943 kernel: NET: Registered PF_INET6 protocol family May 17 00:13:21.248951 kernel: Segment Routing with IPv6 May 17 00:13:21.248959 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:13:21.248969 kernel: NET: Registered PF_PACKET protocol family May 17 00:13:21.248977 kernel: Key type dns_resolver registered May 17 00:13:21.248985 kernel: registered taskstats version 1 May 17 00:13:21.248996 kernel: Loading compiled-in X.509 certificates May 17 00:13:21.249005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:13:21.249013 kernel: Key type .fscrypt registered May 17 00:13:21.249021 kernel: Key type fscrypt-provisioning registered May 17 00:13:21.249029 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:13:21.249037 kernel: ima: Allocated hash algorithm: sha1 May 17 00:13:21.249047 kernel: ima: No architecture policies found May 17 00:13:21.249055 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:13:21.249123 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 17 00:13:21.249192 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 17 00:13:21.249260 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 17 00:13:21.249326 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 17 00:13:21.249392 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 17 00:13:21.249458 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 17 00:13:21.249524 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 17 00:13:21.249593 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 17 00:13:21.249660 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 17 00:13:21.249726 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 17 00:13:21.249793 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 17 00:13:21.249859 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 17 00:13:21.249926 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 17 00:13:21.249996 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 17 00:13:21.250063 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 17 00:13:21.250131 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 17 00:13:21.250200 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 17 00:13:21.250265 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 17 00:13:21.250333 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 17 00:13:21.250398 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 17 00:13:21.250466 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 17 00:13:21.250532 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 17 00:13:21.250599 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 May 17 00:13:21.250667 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 17 00:13:21.250734 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 17 00:13:21.250801 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 17 00:13:21.250866 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 17 00:13:21.250932 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 17 00:13:21.251001 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 17 00:13:21.251069 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 17 00:13:21.251136 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 17 00:13:21.251207 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 17 00:13:21.251272 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 17 00:13:21.251341 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 17 00:13:21.251406 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 17 00:13:21.251472 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 17 00:13:21.251537 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 17 00:13:21.251603 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 17 00:13:21.251670 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 17 00:13:21.251736 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 17 00:13:21.251805 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 17 00:13:21.251870 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 17 00:13:21.251936 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 17 00:13:21.252004 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 17 00:13:21.252071 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 17 00:13:21.252136 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 17 00:13:21.252202 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 17 00:13:21.252267 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
May 17 00:13:21.252336 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 17 00:13:21.252401 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 17 00:13:21.252468 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 17 00:13:21.252532 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 17 00:13:21.252599 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 17 00:13:21.252664 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 17 00:13:21.252731 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 17 00:13:21.252796 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 17 00:13:21.252865 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 17 00:13:21.252930 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 17 00:13:21.252999 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 17 00:13:21.253065 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 17 00:13:21.253133 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 17 00:13:21.253144 kernel: clk: Disabling unused clocks May 17 00:13:21.253152 kernel: Freeing unused kernel memory: 39424K May 17 00:13:21.253160 kernel: Run /init as init process May 17 00:13:21.253170 kernel: with arguments: May 17 00:13:21.253179 kernel: /init May 17 00:13:21.253187 kernel: with environment: May 17 00:13:21.253195 kernel: HOME=/ May 17 00:13:21.253202 kernel: TERM=linux May 17 00:13:21.253210 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:13:21.253221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:13:21.253232 systemd[1]: Detected architecture arm64. May 17 00:13:21.253242 systemd[1]: Running in initrd. 
May 17 00:13:21.253250 systemd[1]: No hostname configured, using default hostname. May 17 00:13:21.253258 systemd[1]: Hostname set to . May 17 00:13:21.253267 systemd[1]: Initializing machine ID from random generator. May 17 00:13:21.253276 systemd[1]: Queued start job for default target initrd.target. May 17 00:13:21.253284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:13:21.253293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:13:21.253302 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:13:21.253312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:13:21.253321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:13:21.253330 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:13:21.253340 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:13:21.253349 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:13:21.253357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:13:21.253368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:13:21.253376 systemd[1]: Reached target paths.target - Path Units. May 17 00:13:21.253385 systemd[1]: Reached target slices.target - Slice Units. May 17 00:13:21.253393 systemd[1]: Reached target swap.target - Swaps. May 17 00:13:21.253402 systemd[1]: Reached target timers.target - Timer Units. May 17 00:13:21.253410 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 00:13:21.253419 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:13:21.253428 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:13:21.253438 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:13:21.253448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:13:21.253456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:13:21.253465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:13:21.253474 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:13:21.253482 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:13:21.253491 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:13:21.253500 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:13:21.253508 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:13:21.253517 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:13:21.253527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:13:21.253557 systemd-journald[899]: Collecting audit messages is disabled. May 17 00:13:21.253577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:13:21.253586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:13:21.253596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:13:21.253605 kernel: Bridge firewalling registered May 17 00:13:21.253614 systemd-journald[899]: Journal started May 17 00:13:21.253632 systemd-journald[899]: Runtime Journal (/run/log/journal/7af364b93484456d89cb6d5fc63f4e8d) is 8.0M, max 4.0G, 3.9G free. 
May 17 00:13:21.211066 systemd-modules-load[901]: Inserted module 'overlay' May 17 00:13:21.285743 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:13:21.233353 systemd-modules-load[901]: Inserted module 'br_netfilter' May 17 00:13:21.291401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:13:21.302215 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:13:21.313101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:13:21.323805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:13:21.348202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:13:21.365182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:13:21.371818 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:13:21.403136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:13:21.420098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:13:21.436947 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:13:21.448578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:13:21.454497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:13:21.483184 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:13:21.490560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:13:21.502543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 17 00:13:21.527704 dracut-cmdline[941]: dracut-dracut-053 May 17 00:13:21.527704 dracut-cmdline[941]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:13:21.516619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:13:21.527049 systemd-resolved[944]: Positive Trust Anchors: May 17 00:13:21.527058 systemd-resolved[944]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:13:21.527090 systemd-resolved[944]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:13:21.542052 systemd-resolved[944]: Defaulting to hostname 'linux'. May 17 00:13:21.543426 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:13:21.579623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:13:21.683994 kernel: SCSI subsystem initialized May 17 00:13:21.698997 kernel: Loading iSCSI transport class v2.0-870. 
May 17 00:13:21.717001 kernel: iscsi: registered transport (tcp) May 17 00:13:21.744654 kernel: iscsi: registered transport (qla4xxx) May 17 00:13:21.744677 kernel: QLogic iSCSI HBA Driver May 17 00:13:21.788825 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:13:21.811160 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:13:21.850994 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:13:21.865522 kernel: device-mapper: uevent: version 1.0.3 May 17 00:13:21.865545 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:13:21.930999 kernel: raid6: neonx8 gen() 15842 MB/s May 17 00:13:21.955998 kernel: raid6: neonx4 gen() 15714 MB/s May 17 00:13:21.980998 kernel: raid6: neonx2 gen() 13280 MB/s May 17 00:13:22.005997 kernel: raid6: neonx1 gen() 10539 MB/s May 17 00:13:22.030998 kernel: raid6: int64x8 gen() 7001 MB/s May 17 00:13:22.055998 kernel: raid6: int64x4 gen() 7388 MB/s May 17 00:13:22.080994 kernel: raid6: int64x2 gen() 6150 MB/s May 17 00:13:22.108982 kernel: raid6: int64x1 gen() 5078 MB/s May 17 00:13:22.109018 kernel: raid6: using algorithm neonx8 gen() 15842 MB/s May 17 00:13:22.143387 kernel: raid6: .... xor() 11975 MB/s, rmw enabled May 17 00:13:22.143409 kernel: raid6: using neon recovery algorithm May 17 00:13:22.166362 kernel: xor: measuring software checksum speed May 17 00:13:22.166384 kernel: 8regs : 19669 MB/sec May 17 00:13:22.174305 kernel: 32regs : 19679 MB/sec May 17 00:13:22.182070 kernel: arm64_neon : 27079 MB/sec May 17 00:13:22.189710 kernel: xor: using function: arm64_neon (27079 MB/sec) May 17 00:13:22.250997 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:13:22.260681 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 17 00:13:22.275116 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:13:22.288238 systemd-udevd[1135]: Using default interface naming scheme 'v255'. May 17 00:13:22.291248 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:13:22.314131 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:13:22.328202 dracut-pre-trigger[1145]: rd.md=0: removing MD RAID activation May 17 00:13:22.354332 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:13:22.369096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:13:22.472373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:13:22.501291 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:13:22.501313 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:13:22.513165 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 17 00:13:22.684881 kernel: ACPI: bus type USB registered May 17 00:13:22.684896 kernel: usbcore: registered new interface driver usbfs May 17 00:13:22.684907 kernel: usbcore: registered new interface driver hub May 17 00:13:22.684917 kernel: usbcore: registered new device driver usb May 17 00:13:22.684927 kernel: PTP clock support registered May 17 00:13:22.684937 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31 May 17 00:13:22.685089 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 17 00:13:22.685175 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 17 00:13:22.685259 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 17 00:13:22.685338 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 00:13:22.685349 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32 May 17 00:13:22.685435 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 00:13:22.685446 kernel: igb 0003:03:00.0: Adding to iommu group 33 May 17 00:13:22.685533 kernel: nvme 0005:03:00.0: Adding to iommu group 34 May 17 00:13:22.685623 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 17 00:13:22.646391 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:13:22.693914 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:13:22.702060 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:13:22.719057 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:13:22.736201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:13:22.749654 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:13:22.749707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:13:22.767348 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 00:13:22.778343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:13:22.921534 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 17 00:13:22.921759 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 17 00:13:22.921852 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 17 00:13:22.921930 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 17 00:13:22.922023 kernel: hub 1-0:1.0: USB hub found May 17 00:13:22.922128 kernel: hub 1-0:1.0: 4 ports detected May 17 00:13:22.922206 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 17 00:13:22.922296 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:13:22.922376 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 17 00:13:22.778386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:13:22.794967 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:13:22.935092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:13:22.945835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:13:22.965146 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:13:23.008985 kernel: hub 2-0:1.0: USB hub found May 17 00:13:23.009124 kernel: hub 2-0:1.0: 4 ports detected May 17 00:13:23.009206 kernel: nvme nvme0: pci function 0005:03:00.0 May 17 00:13:23.009296 kernel: nvme nvme1: pci function 0005:04:00.0 May 17 00:13:23.027181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 00:13:23.133079 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 17 00:13:23.133214 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 17 00:13:23.133291 kernel: igb 0003:03:00.0: added PHC on eth0
May 17 00:13:23.133387 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 17 00:13:23.133466 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:94
May 17 00:13:23.133543 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
May 17 00:13:23.133621 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 00:13:23.133698 kernel: igb 0003:03:00.1: Adding to iommu group 36
May 17 00:13:23.133784 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 17 00:13:23.133234 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:13:23.356080 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 17 00:13:23.356197 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:13:23.356209 kernel: GPT:9289727 != 1875385007
May 17 00:13:23.356219 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:13:23.356229 kernel: GPT:9289727 != 1875385007
May 17 00:13:23.356239 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:13:23.356253 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:13:23.356263 kernel: igb 0003:03:00.1: added PHC on eth1
May 17 00:13:23.356354 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (1181)
May 17 00:13:23.356368 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 17 00:13:23.356445 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1180)
May 17 00:13:23.356456 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:95
May 17 00:13:23.356532 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 17 00:13:23.356610 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 00:13:23.356689 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 17 00:13:23.356772 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 17 00:13:23.356859 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 17 00:13:23.356937 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 17 00:13:23.297543 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 17 00:13:23.369150 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 17 00:13:23.379837 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:13:23.391226 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:13:23.411202 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 00:13:23.438138 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:13:23.465414 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:13:23.465431 disk-uuid[1289]: Primary Header is updated.
May 17 00:13:23.465431 disk-uuid[1289]: Secondary Entries is updated.
May 17 00:13:23.465431 disk-uuid[1289]: Secondary Header is updated.
May 17 00:13:23.501520 kernel: hub 1-3:1.0: USB hub found
May 17 00:13:23.501673 kernel: hub 1-3:1.0: 4 ports detected
May 17 00:13:23.590003 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 17 00:13:23.625134 kernel: hub 2-3:1.0: USB hub found
May 17 00:13:23.625345 kernel: hub 2-3:1.0: 4 ports detected
May 17 00:13:23.648000 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:13:23.660995 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 17 00:13:23.683556 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
May 17 00:13:23.683640 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:13:24.028606 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 17 00:13:24.332001 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:13:24.346996 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 17 00:13:24.363995 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 17 00:13:24.464004 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:13:24.464088 disk-uuid[1290]: The operation has completed successfully.
May 17 00:13:24.485230 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:13:24.485316 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:13:24.522171 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:13:24.533277 sh[1477]: Success
May 17 00:13:24.552993 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:13:24.585014 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:13:24.613162 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:13:24.623195 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:13:24.715807 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 00:13:24.715823 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:24.715833 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:13:24.715844 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:13:24.715854 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:13:24.715864 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:13:24.643824 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:13:24.721799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:13:24.732106 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:13:24.813274 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:24.813289 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:24.813299 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:24.813309 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:24.813319 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:24.741834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:13:24.851353 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:24.841275 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:13:24.879101 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:13:24.922168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:13:24.944154 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:13:24.956875 ignition[1584]: Ignition 2.19.0
May 17 00:13:24.956882 ignition[1584]: Stage: fetch-offline
May 17 00:13:24.962113 unknown[1584]: fetched base config from "system"
May 17 00:13:24.956922 ignition[1584]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:24.962120 unknown[1584]: fetched user config from "system"
May 17 00:13:24.956930 ignition[1584]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:24.968203 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:13:24.957081 ignition[1584]: parsed url from cmdline: ""
May 17 00:13:24.969119 systemd-networkd[1688]: lo: Link UP
May 17 00:13:24.957084 ignition[1584]: no config URL provided
May 17 00:13:24.969123 systemd-networkd[1688]: lo: Gained carrier
May 17 00:13:24.957089 ignition[1584]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:13:24.972743 systemd-networkd[1688]: Enumeration completed
May 17 00:13:24.957141 ignition[1584]: parsing config with SHA512: d7bfd0dda6488b5d259645eb48211a45d6daff5b8649cccd5d9fc83d5c9060f61f5e3189b613b0e901025e9f77c340e1da95c211288dfdcb0aab9e852808229a
May 17 00:13:24.973879 systemd-networkd[1688]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:24.966081 ignition[1584]: fetch-offline: fetch-offline passed
May 17 00:13:24.978613 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:13:24.966086 ignition[1584]: POST message to Packet Timeline
May 17 00:13:24.988875 systemd[1]: Reached target network.target - Network.
May 17 00:13:24.966091 ignition[1584]: POST Status error: resource requires networking
May 17 00:13:24.998575 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:13:24.966164 ignition[1584]: Ignition finished successfully
May 17 00:13:25.012165 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:13:25.036041 ignition[1706]: Ignition 2.19.0
May 17 00:13:25.025426 systemd-networkd[1688]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:25.036048 ignition[1706]: Stage: kargs
May 17 00:13:25.076503 systemd-networkd[1688]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:25.036286 ignition[1706]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:25.036295 ignition[1706]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:25.037310 ignition[1706]: kargs: kargs passed
May 17 00:13:25.037314 ignition[1706]: POST message to Packet Timeline
May 17 00:13:25.037327 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:25.040341 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46938->[::1]:53: read: connection refused
May 17 00:13:25.240473 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #2
May 17 00:13:25.240890 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60609->[::1]:53: read: connection refused
May 17 00:13:25.641900 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #3
May 17 00:13:25.651644 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 00:13:25.649830 systemd-networkd[1688]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:25.642278 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50796->[::1]:53: read: connection refused
May 17 00:13:26.267001 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 00:13:26.269837 systemd-networkd[1688]: eno1: Link UP
May 17 00:13:26.270049 systemd-networkd[1688]: eno2: Link UP
May 17 00:13:26.270175 systemd-networkd[1688]: enP1p1s0f0np0: Link UP
May 17 00:13:26.270320 systemd-networkd[1688]: enP1p1s0f0np0: Gained carrier
May 17 00:13:26.281147 systemd-networkd[1688]: enP1p1s0f1np1: Link UP
May 17 00:13:26.315021 systemd-networkd[1688]: enP1p1s0f0np0: DHCPv4 address 147.28.129.25/31, gateway 147.28.129.24 acquired from 147.28.144.140
May 17 00:13:26.442579 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #4
May 17 00:13:26.442976 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39326->[::1]:53: read: connection refused
May 17 00:13:26.656252 systemd-networkd[1688]: enP1p1s0f1np1: Gained carrier
May 17 00:13:27.280069 systemd-networkd[1688]: enP1p1s0f0np0: Gained IPv6LL
May 17 00:13:28.044227 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #5
May 17 00:13:28.044649 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38247->[::1]:53: read: connection refused
May 17 00:13:28.304054 systemd-networkd[1688]: enP1p1s0f1np1: Gained IPv6LL
May 17 00:13:31.249533 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #6
May 17 00:13:31.802583 ignition[1706]: GET result: OK
May 17 00:13:32.142128 ignition[1706]: Ignition finished successfully
May 17 00:13:32.145963 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:13:32.164112 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:13:32.179657 ignition[1730]: Ignition 2.19.0
May 17 00:13:32.179664 ignition[1730]: Stage: disks
May 17 00:13:32.179822 ignition[1730]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:32.179831 ignition[1730]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:32.180769 ignition[1730]: disks: disks passed
May 17 00:13:32.180773 ignition[1730]: POST message to Packet Timeline
May 17 00:13:32.180786 ignition[1730]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:32.782955 ignition[1730]: GET result: OK
May 17 00:13:33.292107 ignition[1730]: Ignition finished successfully
May 17 00:13:33.295043 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:13:33.300970 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:13:33.308451 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:13:33.316426 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:13:33.324936 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:13:33.333789 systemd[1]: Reached target basic.target - Basic System.
May 17 00:13:33.356151 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:13:33.371365 systemd-fsck[1752]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:13:33.375104 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:13:33.394081 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:13:33.458997 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 00:13:33.459218 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:13:33.469686 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:13:33.493068 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:33.500993 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1762)
May 17 00:13:33.501011 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:33.501022 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:33.501033 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:33.501995 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:33.502023 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:33.595069 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:13:33.601470 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:13:33.612734 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 17 00:13:33.636430 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:13:33.657898 coreos-metadata[1780]: May 17 00:13:33.654 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:33.636461 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:33.685341 coreos-metadata[1782]: May 17 00:13:33.654 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:33.648957 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:33.663391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:13:33.695113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:13:33.728340 initrd-setup-root[1803]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:13:33.734597 initrd-setup-root[1811]: cut: /sysroot/etc/group: No such file or directory
May 17 00:13:33.740858 initrd-setup-root[1819]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:13:33.747059 initrd-setup-root[1826]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:13:33.817798 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:13:33.841059 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:13:33.871554 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:33.847334 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:13:33.877823 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:13:33.892974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:13:33.904294 ignition[1900]: INFO : Ignition 2.19.0
May 17 00:13:33.904294 ignition[1900]: INFO : Stage: mount
May 17 00:13:33.914805 ignition[1900]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:33.914805 ignition[1900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:33.914805 ignition[1900]: INFO : mount: mount passed
May 17 00:13:33.914805 ignition[1900]: INFO : POST message to Packet Timeline
May 17 00:13:33.914805 ignition[1900]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:34.165608 coreos-metadata[1780]: May 17 00:13:34.165 INFO Fetch successful
May 17 00:13:34.201608 coreos-metadata[1782]: May 17 00:13:34.201 INFO Fetch successful
May 17 00:13:34.210536 coreos-metadata[1780]: May 17 00:13:34.210 INFO wrote hostname ci-4081.3.3-n-02409cc2a5 to /sysroot/etc/hostname
May 17 00:13:34.213695 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:34.250133 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 00:13:34.252018 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 17 00:13:34.427524 ignition[1900]: INFO : GET result: OK
May 17 00:13:34.818728 ignition[1900]: INFO : Ignition finished successfully
May 17 00:13:34.821088 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:13:34.840069 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:13:34.851917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:34.886870 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1924)
May 17 00:13:34.886909 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:34.901087 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:34.913968 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:34.936608 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:34.936631 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:34.944708 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:34.973445 ignition[1942]: INFO : Ignition 2.19.0
May 17 00:13:34.973445 ignition[1942]: INFO : Stage: files
May 17 00:13:34.982791 ignition[1942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:34.982791 ignition[1942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:34.982791 ignition[1942]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:13:34.982791 ignition[1942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 17 00:13:34.982791 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 17 00:13:34.979000 unknown[1942]: wrote ssh authorized keys file for user: core
May 17 00:13:35.778444 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:13:36.727254 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 17 00:13:37.004147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:13:37.261446 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:37.261446 ignition[1942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:37.286110 ignition[1942]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:37.286110 ignition[1942]: INFO : files: files passed
May 17 00:13:37.286110 ignition[1942]: INFO : POST message to Packet Timeline
May 17 00:13:37.286110 ignition[1942]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:37.854267 ignition[1942]: INFO : GET result: OK
May 17 00:13:38.194577 ignition[1942]: INFO : Ignition finished successfully
May 17 00:13:38.197175 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:13:38.214120 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:13:38.220871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:13:38.232839 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:13:38.232916 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:13:38.268089 initrd-setup-root-after-ignition[1984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:38.268089 initrd-setup-root-after-ignition[1984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:38.251072 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:38.314455 initrd-setup-root-after-ignition[1988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:38.263870 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:13:38.293139 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:13:38.328622 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:13:38.328698 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:13:38.338767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:13:38.355028 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:13:38.366514 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:13:38.377174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:13:38.399792 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:38.422137 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:13:38.438608 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:13:38.448010 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:13:38.459478 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:13:38.471002 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:13:38.471102 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:38.482735 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:13:38.493916 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:13:38.505307 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:13:38.516587 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:38.527760 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:13:38.538931 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:13:38.550044 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:13:38.561263 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:13:38.572488 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:13:38.589671 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:13:38.600925 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:13:38.601027 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:13:38.612450 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:13:38.623631 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:13:38.634924 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:13:38.635517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:13:38.646251 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:13:38.646346 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:13:38.657739 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:13:38.657843 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:13:38.669160 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:13:38.680337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:13:38.680427 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:13:38.697519 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:13:38.709283 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:13:38.720859 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:13:38.720962 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:13:38.828317 ignition[2009]: INFO : Ignition 2.19.0
May 17 00:13:38.828317 ignition[2009]: INFO : Stage: umount
May 17 00:13:38.828317 ignition[2009]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:38.828317 ignition[2009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:38.828317 ignition[2009]: INFO : umount: umount passed
May 17 00:13:38.828317 ignition[2009]: INFO : POST message to Packet Timeline
May 17 00:13:38.828317 ignition[2009]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:38.732560 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:13:38.732683 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:13:38.744284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:13:38.744372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:38.755934 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:13:38.756024 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:13:38.767622 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:13:38.767704 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:38.791109 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:13:38.797238 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:13:38.797341 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:13:38.810804 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:13:38.822284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:13:38.822389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:13:38.834396 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:13:38.834482 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:13:38.848181 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:13:38.849052 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:13:38.849129 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:13:38.859030 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:13:38.859106 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:13:39.291703 ignition[2009]: INFO : GET result: OK
May 17 00:13:39.697944 ignition[2009]: INFO : Ignition finished successfully
May 17 00:13:39.701393 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:13:39.701592 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:13:39.708314 systemd[1]: Stopped target network.target - Network.
May 17 00:13:39.717500 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:13:39.717560 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:13:39.727152 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:13:39.727183 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:13:39.736695 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:13:39.736743 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:13:39.746263 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:13:39.746299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:13:39.756003 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:13:39.756033 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:13:39.766007 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:13:39.775585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:13:39.776009 systemd-networkd[1688]: enP1p1s0f1np1: DHCPv6 lease lost May 17 00:13:39.785534 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:13:39.785656 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:13:39.788145 systemd-networkd[1688]: enP1p1s0f0np0: DHCPv6 lease lost May 17 00:13:39.797766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:13:39.797893 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:13:39.805907 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:13:39.806050 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:13:39.816202 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:13:39.816351 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
May 17 00:13:39.838127 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:13:39.844999 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:13:39.845061 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:13:39.855125 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:13:39.855156 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:13:39.865269 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:13:39.865297 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:13:39.875674 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:13:39.899409 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:13:39.899515 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:13:39.909332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:13:39.909513 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:13:39.918528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:13:39.918581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:13:39.929263 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:13:39.929299 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:13:39.940354 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:13:39.940390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:13:39.951004 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:13:39.951056 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:13:39.972136 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:13:39.984236 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:13:39.984299 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:13:39.995481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:13:39.995512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:13:40.007277 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:13:40.007347 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:13:40.562508 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:13:40.562612 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:13:40.573893 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:13:40.596093 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:13:40.605713 systemd[1]: Switching root. May 17 00:13:40.661353 systemd-journald[899]: Journal stopped May 17 00:13:21.191680 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 17 00:13:21.191702 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025 May 17 00:13:21.191711 kernel: KASLR enabled May 17 00:13:21.191717 kernel: efi: EFI v2.7 by American Megatrends May 17 00:13:21.191723 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea468818 RNG=0xebf00018 MEMRESERVE=0xe45d2f98 May 17 00:13:21.191728 kernel: random: crng init done May 17 00:13:21.191736 kernel: esrt: Reserving ESRT space from 0x00000000ea468818 to 0x00000000ea468878. 
May 17 00:13:21.191742 kernel: ACPI: Early table checksum verification disabled May 17 00:13:21.191749 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) May 17 00:13:21.191755 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 17 00:13:21.191762 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 17 00:13:21.191768 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 17 00:13:21.191774 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 17 00:13:21.191780 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 17 00:13:21.191789 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 17 00:13:21.191795 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 17 00:13:21.191802 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 17 00:13:21.191809 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) May 17 00:13:21.191815 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 17 00:13:21.191821 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 17 00:13:21.191828 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) May 17 00:13:21.191834 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 17 00:13:21.191841 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 17 00:13:21.191849 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 17 00:13:21.191855 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) May 17 00:13:21.191861 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 17 00:13:21.191868 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 17 00:13:21.191874 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 17 00:13:21.191881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 17 00:13:21.191887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 17 00:13:21.191893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 17 00:13:21.191900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 17 00:13:21.191906 kernel: NUMA: NODE_DATA [mem 0x83fdffc9800-0x83fdffcefff] May 17 00:13:21.191912 kernel: Zone ranges: May 17 00:13:21.191919 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 17 00:13:21.191926 kernel: DMA32 empty May 17 00:13:21.191932 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 17 00:13:21.191939 kernel: Movable zone start for each node May 17 00:13:21.191945 kernel: Early memory node ranges May 17 00:13:21.191952 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 17 00:13:21.191961 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] May 17 00:13:21.191968 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 17 00:13:21.191976 kernel: node 0: [mem 0x0000000094000000-0x00000000eba2dfff] May 17 00:13:21.191983 kernel: node 0: [mem 0x00000000eba2e000-0x00000000ebeaffff] May 17 00:13:21.192049 kernel: node 0: [mem 0x00000000ebeb0000-0x00000000ebeb9fff] May 17 00:13:21.192056 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff] May 17 00:13:21.192063 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] May 17 00:13:21.192070 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] May 17 00:13:21.192076 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] May 17 00:13:21.192083 kernel: node 0: [mem 
0x00000000ec0f0000-0x00000000ec0fffff] May 17 00:13:21.192090 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff] May 17 00:13:21.192097 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff] May 17 00:13:21.192106 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 17 00:13:21.192113 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 17 00:13:21.192119 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 17 00:13:21.192126 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 17 00:13:21.192133 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 17 00:13:21.192140 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 17 00:13:21.192146 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 17 00:13:21.192153 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 17 00:13:21.192160 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 17 00:13:21.192167 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 17 00:13:21.192173 kernel: psci: probing for conduit method from ACPI. May 17 00:13:21.192181 kernel: psci: PSCIv1.1 detected in firmware. May 17 00:13:21.192188 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:13:21.192195 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 17 00:13:21.192202 kernel: psci: SMC Calling Convention v1.2 May 17 00:13:21.192208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 17 00:13:21.192215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 17 00:13:21.192222 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 17 00:13:21.192229 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 17 00:13:21.192235 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 17 00:13:21.192242 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 17 00:13:21.192249 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 17 00:13:21.192255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 17 00:13:21.192263 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 17 00:13:21.192270 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 17 00:13:21.192276 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 17 00:13:21.192283 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 17 00:13:21.192289 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 17 00:13:21.192296 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 17 00:13:21.192303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 17 00:13:21.192309 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 17 00:13:21.192316 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 17 00:13:21.192323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 17 00:13:21.192329 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 17 00:13:21.192336 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 17 00:13:21.192344 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 17 00:13:21.192351 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 May 17 00:13:21.192357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 17 00:13:21.192364 kernel: ACPI: NUMA: SRAT: PXM 0 
-> MPIDR 0xb0100 -> Node 0 May 17 00:13:21.192371 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 17 00:13:21.192378 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 17 00:13:21.192384 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 17 00:13:21.192391 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 17 00:13:21.192398 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 17 00:13:21.192404 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 17 00:13:21.192411 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 17 00:13:21.192419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 17 00:13:21.192425 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 17 00:13:21.192432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 17 00:13:21.192439 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 17 00:13:21.192446 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 17 00:13:21.192452 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 17 00:13:21.192459 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 17 00:13:21.192466 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 17 00:13:21.192473 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 May 17 00:13:21.192480 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 17 00:13:21.192486 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 17 00:13:21.192493 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 17 00:13:21.192501 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 17 00:13:21.192508 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 17 00:13:21.192514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 17 00:13:21.192521 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 17 00:13:21.192528 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x170100 -> Node 0 May 17 00:13:21.192535 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 17 00:13:21.192541 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 17 00:13:21.192548 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 17 00:13:21.192561 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 17 00:13:21.192568 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 17 00:13:21.192576 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 17 00:13:21.192584 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 17 00:13:21.192591 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 17 00:13:21.192598 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 17 00:13:21.192605 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 17 00:13:21.192612 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 17 00:13:21.192620 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 17 00:13:21.192628 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 17 00:13:21.192635 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 17 00:13:21.192642 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 17 00:13:21.192649 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 17 00:13:21.192656 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 17 00:13:21.192663 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 17 00:13:21.192670 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 17 00:13:21.192677 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 17 00:13:21.192684 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 17 00:13:21.192692 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 17 00:13:21.192699 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 17 00:13:21.192707 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x230100 -> Node 0 May 17 00:13:21.192714 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 17 00:13:21.192721 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 17 00:13:21.192728 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 17 00:13:21.192736 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 17 00:13:21.192743 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 17 00:13:21.192750 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 17 00:13:21.192757 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 17 00:13:21.192764 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 17 00:13:21.192771 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 17 00:13:21.192778 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 17 00:13:21.192787 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 17 00:13:21.192794 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 17 00:13:21.192802 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 17 00:13:21.192809 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 17 00:13:21.192816 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 17 00:13:21.192823 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 17 00:13:21.192831 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 17 00:13:21.192838 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 17 00:13:21.192845 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 17 00:13:21.192852 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 17 00:13:21.192859 kernel: Detected PIPT I-cache on CPU0 May 17 00:13:21.192867 kernel: CPU features: detected: GIC system register CPU interface 
May 17 00:13:21.192875 kernel: CPU features: detected: Virtualization Host Extensions May 17 00:13:21.192882 kernel: CPU features: detected: Hardware dirty bit management May 17 00:13:21.192889 kernel: CPU features: detected: Spectre-v4 May 17 00:13:21.192896 kernel: CPU features: detected: Spectre-BHB May 17 00:13:21.192903 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 00:13:21.192911 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 00:13:21.192918 kernel: CPU features: detected: ARM erratum 1418040 May 17 00:13:21.192925 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 00:13:21.192932 kernel: alternatives: applying boot alternatives May 17 00:13:21.192941 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:13:21.192950 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 17 00:13:21.192957 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 17 00:13:21.192964 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 17 00:13:21.192971 kernel: printk: log_buf_len min size: 262144 bytes May 17 00:13:21.192978 kernel: printk: log_buf_len: 1048576 bytes May 17 00:13:21.192985 kernel: printk: early log buf free: 249904(95%) May 17 00:13:21.192995 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 17 00:13:21.193003 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 17 00:13:21.193010 kernel: Fallback order for Node 0: 0 May 17 00:13:21.193017 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 17 00:13:21.193024 kernel: Policy zone: Normal May 17 00:13:21.193033 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:13:21.193040 kernel: software IO TLB: area num 128. May 17 00:13:21.193048 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 17 00:13:21.193055 kernel: Memory: 262922448K/268174336K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 5251888K reserved, 0K cma-reserved) May 17 00:13:21.193063 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 17 00:13:21.193070 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:13:21.193077 kernel: rcu: RCU event tracing is enabled. May 17 00:13:21.193085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 17 00:13:21.193092 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:13:21.193099 kernel: Tracing variant of Tasks RCU enabled. May 17 00:13:21.193107 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:13:21.193115 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 17 00:13:21.193123 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:13:21.193130 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 17 00:13:21.193137 kernel: GICv3: 672 SPIs implemented May 17 00:13:21.193144 kernel: GICv3: 0 Extended SPIs implemented May 17 00:13:21.193151 kernel: Root IRQ handler: gic_handle_irq May 17 00:13:21.193158 kernel: GICv3: GICv3 features: 16 PPIs May 17 00:13:21.193165 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 17 00:13:21.193173 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 17 00:13:21.193180 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 17 00:13:21.193187 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 17 00:13:21.193194 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 17 00:13:21.193201 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 17 00:13:21.193210 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 17 00:13:21.193217 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 17 00:13:21.193224 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 17 00:13:21.193231 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 17 00:13:21.193238 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193246 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193253 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 17 00:13:21.193260 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193267 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193275 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 17 00:13:21.193282 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193291 kernel: 
ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193298 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 17 00:13:21.193305 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193312 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193320 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 17 00:13:21.193327 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193334 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193341 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 17 00:13:21.193349 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193356 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193363 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 17 00:13:21.193372 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193380 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193387 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 17 00:13:21.193394 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 17 00:13:21.193401 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 17 00:13:21.193409 kernel: GICv3: using LPI property table @0x00000800003e0000 May 17 00:13:21.193416 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 17 00:13:21.193423 kernel: rcu: srcu_init: 
Setting srcu_struct sizes based on contention. May 17 00:13:21.193430 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193438 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 17 00:13:21.193445 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 17 00:13:21.193453 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 00:13:21.193461 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 00:13:21.193468 kernel: Console: colour dummy device 80x25 May 17 00:13:21.193475 kernel: printk: console [tty0] enabled May 17 00:13:21.193483 kernel: ACPI: Core revision 20230628 May 17 00:13:21.193490 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 00:13:21.193497 kernel: pid_max: default: 81920 minimum: 640 May 17 00:13:21.193505 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:13:21.193512 kernel: landlock: Up and running. May 17 00:13:21.193519 kernel: SELinux: Initializing. May 17 00:13:21.193528 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:13:21.193536 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:13:21.193543 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 17 00:13:21.193551 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 17 00:13:21.193558 kernel: rcu: Hierarchical SRCU implementation. May 17 00:13:21.193566 kernel: rcu: Max phase no-delay instances is 400. 
May 17 00:13:21.193573 kernel: Platform MSI: ITS@0x100100040000 domain created May 17 00:13:21.193580 kernel: Platform MSI: ITS@0x100100060000 domain created May 17 00:13:21.193588 kernel: Platform MSI: ITS@0x100100080000 domain created May 17 00:13:21.193596 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 17 00:13:21.193604 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 17 00:13:21.193611 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 17 00:13:21.193618 kernel: Platform MSI: ITS@0x100100100000 domain created May 17 00:13:21.193625 kernel: Platform MSI: ITS@0x100100120000 domain created May 17 00:13:21.193633 kernel: PCI/MSI: ITS@0x100100040000 domain created May 17 00:13:21.193640 kernel: PCI/MSI: ITS@0x100100060000 domain created May 17 00:13:21.193647 kernel: PCI/MSI: ITS@0x100100080000 domain created May 17 00:13:21.193655 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 17 00:13:21.193663 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 17 00:13:21.193670 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 17 00:13:21.193678 kernel: PCI/MSI: ITS@0x100100100000 domain created May 17 00:13:21.193685 kernel: PCI/MSI: ITS@0x100100120000 domain created May 17 00:13:21.193692 kernel: Remapping and enabling EFI services. May 17 00:13:21.193700 kernel: smp: Bringing up secondary CPUs ... 
May 17 00:13:21.193707 kernel: Detected PIPT I-cache on CPU1 May 17 00:13:21.193714 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 May 17 00:13:21.193722 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 May 17 00:13:21.193731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193738 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] May 17 00:13:21.193745 kernel: Detected PIPT I-cache on CPU2 May 17 00:13:21.193753 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 May 17 00:13:21.193760 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 May 17 00:13:21.193767 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193775 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] May 17 00:13:21.193782 kernel: Detected PIPT I-cache on CPU3 May 17 00:13:21.193789 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 May 17 00:13:21.193797 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 May 17 00:13:21.193805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193812 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] May 17 00:13:21.193820 kernel: Detected PIPT I-cache on CPU4 May 17 00:13:21.193827 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 May 17 00:13:21.193834 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 May 17 00:13:21.193842 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193849 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] May 17 00:13:21.193856 kernel: Detected PIPT I-cache on CPU5 May 17 00:13:21.193863 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 May 17 00:13:21.193872 kernel: GICv3: CPU5: using allocated LPI pending 
table @0x0000080000840000 May 17 00:13:21.193880 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193887 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1] May 17 00:13:21.193894 kernel: Detected PIPT I-cache on CPU6 May 17 00:13:21.193901 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 May 17 00:13:21.193909 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 May 17 00:13:21.193916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193923 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] May 17 00:13:21.193930 kernel: Detected PIPT I-cache on CPU7 May 17 00:13:21.193938 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 May 17 00:13:21.193946 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 May 17 00:13:21.193954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.193961 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] May 17 00:13:21.193968 kernel: Detected PIPT I-cache on CPU8 May 17 00:13:21.193976 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 May 17 00:13:21.193983 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 May 17 00:13:21.193992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194000 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] May 17 00:13:21.194007 kernel: Detected PIPT I-cache on CPU9 May 17 00:13:21.194014 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 May 17 00:13:21.194023 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 May 17 00:13:21.194031 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194038 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] May 17 00:13:21.194045 
kernel: Detected PIPT I-cache on CPU10 May 17 00:13:21.194053 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 May 17 00:13:21.194060 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 May 17 00:13:21.194067 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194075 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] May 17 00:13:21.194082 kernel: Detected PIPT I-cache on CPU11 May 17 00:13:21.194091 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 May 17 00:13:21.194098 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 May 17 00:13:21.194105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194112 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] May 17 00:13:21.194120 kernel: Detected PIPT I-cache on CPU12 May 17 00:13:21.194127 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 May 17 00:13:21.194134 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 May 17 00:13:21.194141 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194148 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] May 17 00:13:21.194156 kernel: Detected PIPT I-cache on CPU13 May 17 00:13:21.194164 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 May 17 00:13:21.194172 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 May 17 00:13:21.194179 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194187 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] May 17 00:13:21.194194 kernel: Detected PIPT I-cache on CPU14 May 17 00:13:21.194201 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 May 17 00:13:21.194208 kernel: GICv3: CPU14: using allocated LPI pending table 
@0x00000800008d0000 May 17 00:13:21.194216 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194223 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] May 17 00:13:21.194232 kernel: Detected PIPT I-cache on CPU15 May 17 00:13:21.194239 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 May 17 00:13:21.194246 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 May 17 00:13:21.194254 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194261 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] May 17 00:13:21.194269 kernel: Detected PIPT I-cache on CPU16 May 17 00:13:21.194276 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 May 17 00:13:21.194283 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 May 17 00:13:21.194291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194308 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] May 17 00:13:21.194317 kernel: Detected PIPT I-cache on CPU17 May 17 00:13:21.194324 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 May 17 00:13:21.194332 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 May 17 00:13:21.194340 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194347 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] May 17 00:13:21.194355 kernel: Detected PIPT I-cache on CPU18 May 17 00:13:21.194362 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 May 17 00:13:21.194370 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 May 17 00:13:21.194379 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194387 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] May 17 00:13:21.194395 
kernel: Detected PIPT I-cache on CPU19 May 17 00:13:21.194402 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 17 00:13:21.194410 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 17 00:13:21.194418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194425 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 17 00:13:21.194435 kernel: Detected PIPT I-cache on CPU20 May 17 00:13:21.194443 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 17 00:13:21.194451 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 17 00:13:21.194459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194466 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 17 00:13:21.194474 kernel: Detected PIPT I-cache on CPU21 May 17 00:13:21.194482 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 17 00:13:21.194490 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 17 00:13:21.194497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194506 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 17 00:13:21.194514 kernel: Detected PIPT I-cache on CPU22 May 17 00:13:21.194523 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 17 00:13:21.194531 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 17 00:13:21.194538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194546 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 17 00:13:21.194554 kernel: Detected PIPT I-cache on CPU23 May 17 00:13:21.194561 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 17 00:13:21.194569 kernel: GICv3: CPU23: using allocated LPI pending table 
@0x0000080000960000 May 17 00:13:21.194578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194586 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 17 00:13:21.194593 kernel: Detected PIPT I-cache on CPU24 May 17 00:13:21.194601 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 17 00:13:21.194609 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 17 00:13:21.194617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194624 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 17 00:13:21.194632 kernel: Detected PIPT I-cache on CPU25 May 17 00:13:21.194640 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 17 00:13:21.194648 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 17 00:13:21.194656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194664 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 17 00:13:21.194673 kernel: Detected PIPT I-cache on CPU26 May 17 00:13:21.194681 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 17 00:13:21.194689 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 17 00:13:21.194697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194704 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 17 00:13:21.194712 kernel: Detected PIPT I-cache on CPU27 May 17 00:13:21.194720 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 17 00:13:21.194729 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 17 00:13:21.194736 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194744 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 17 
00:13:21.194752 kernel: Detected PIPT I-cache on CPU28 May 17 00:13:21.194759 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 17 00:13:21.194767 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 17 00:13:21.194775 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194782 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 17 00:13:21.194790 kernel: Detected PIPT I-cache on CPU29 May 17 00:13:21.194798 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 17 00:13:21.194807 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 17 00:13:21.194815 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194823 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] May 17 00:13:21.194830 kernel: Detected PIPT I-cache on CPU30 May 17 00:13:21.194838 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 17 00:13:21.194846 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 17 00:13:21.194854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194862 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 17 00:13:21.194869 kernel: Detected PIPT I-cache on CPU31 May 17 00:13:21.194878 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 17 00:13:21.194886 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 17 00:13:21.194894 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194902 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 17 00:13:21.194909 kernel: Detected PIPT I-cache on CPU32 May 17 00:13:21.194917 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 17 00:13:21.194924 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 May 17 00:13:21.194932 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194940 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 17 00:13:21.194948 kernel: Detected PIPT I-cache on CPU33 May 17 00:13:21.194957 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 17 00:13:21.194964 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 17 00:13:21.194972 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.194980 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 17 00:13:21.194989 kernel: Detected PIPT I-cache on CPU34 May 17 00:13:21.194997 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 17 00:13:21.195005 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 17 00:13:21.195013 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195020 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 17 00:13:21.195030 kernel: Detected PIPT I-cache on CPU35 May 17 00:13:21.195038 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 17 00:13:21.195046 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 17 00:13:21.195053 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195061 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 17 00:13:21.195069 kernel: Detected PIPT I-cache on CPU36 May 17 00:13:21.195076 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 17 00:13:21.195084 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 17 00:13:21.195092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195099 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 
17 00:13:21.195108 kernel: Detected PIPT I-cache on CPU37 May 17 00:13:21.195116 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 17 00:13:21.195124 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 17 00:13:21.195131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195139 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 17 00:13:21.195146 kernel: Detected PIPT I-cache on CPU38 May 17 00:13:21.195154 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 17 00:13:21.195162 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 17 00:13:21.195170 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195179 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 17 00:13:21.195186 kernel: Detected PIPT I-cache on CPU39 May 17 00:13:21.195194 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 17 00:13:21.195203 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 17 00:13:21.195211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195218 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 17 00:13:21.195226 kernel: Detected PIPT I-cache on CPU40 May 17 00:13:21.195234 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 17 00:13:21.195243 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 17 00:13:21.195250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195258 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 17 00:13:21.195266 kernel: Detected PIPT I-cache on CPU41 May 17 00:13:21.195274 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 17 00:13:21.195281 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 May 17 00:13:21.195289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195297 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 17 00:13:21.195304 kernel: Detected PIPT I-cache on CPU42 May 17 00:13:21.195313 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 17 00:13:21.195321 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 17 00:13:21.195329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195336 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 17 00:13:21.195344 kernel: Detected PIPT I-cache on CPU43 May 17 00:13:21.195352 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 17 00:13:21.195359 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 17 00:13:21.195367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195375 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 17 00:13:21.195382 kernel: Detected PIPT I-cache on CPU44 May 17 00:13:21.195391 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 17 00:13:21.195399 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 17 00:13:21.195407 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195415 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 17 00:13:21.195422 kernel: Detected PIPT I-cache on CPU45 May 17 00:13:21.195430 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 17 00:13:21.195437 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 17 00:13:21.195445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195453 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
May 17 00:13:21.195462 kernel: Detected PIPT I-cache on CPU46 May 17 00:13:21.195469 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 17 00:13:21.195477 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 17 00:13:21.195485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195492 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 17 00:13:21.195500 kernel: Detected PIPT I-cache on CPU47 May 17 00:13:21.195508 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 17 00:13:21.195515 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 17 00:13:21.195523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195531 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 17 00:13:21.195539 kernel: Detected PIPT I-cache on CPU48 May 17 00:13:21.195547 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 17 00:13:21.195555 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 17 00:13:21.195563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195570 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 17 00:13:21.195578 kernel: Detected PIPT I-cache on CPU49 May 17 00:13:21.195586 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 17 00:13:21.195593 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 17 00:13:21.195601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195610 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 17 00:13:21.195618 kernel: Detected PIPT I-cache on CPU50 May 17 00:13:21.195625 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 17 00:13:21.195633 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 May 17 00:13:21.195641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195649 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 17 00:13:21.195656 kernel: Detected PIPT I-cache on CPU51 May 17 00:13:21.195665 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 17 00:13:21.195673 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 17 00:13:21.195682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195690 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 17 00:13:21.195697 kernel: Detected PIPT I-cache on CPU52 May 17 00:13:21.195705 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 17 00:13:21.195713 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 17 00:13:21.195720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195728 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 17 00:13:21.195736 kernel: Detected PIPT I-cache on CPU53 May 17 00:13:21.195743 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 17 00:13:21.195751 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 17 00:13:21.195761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195768 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 17 00:13:21.195776 kernel: Detected PIPT I-cache on CPU54 May 17 00:13:21.195784 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 17 00:13:21.195791 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 17 00:13:21.195799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195807 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] May 17 00:13:21.195814 kernel: Detected PIPT I-cache on CPU55 May 17 00:13:21.195822 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 17 00:13:21.195831 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 17 00:13:21.195839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195847 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 17 00:13:21.195854 kernel: Detected PIPT I-cache on CPU56 May 17 00:13:21.195862 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 17 00:13:21.195870 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 17 00:13:21.195878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195885 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 17 00:13:21.195893 kernel: Detected PIPT I-cache on CPU57 May 17 00:13:21.195901 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 17 00:13:21.195910 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 17 00:13:21.195918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195925 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 17 00:13:21.195933 kernel: Detected PIPT I-cache on CPU58 May 17 00:13:21.195940 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 17 00:13:21.195948 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 17 00:13:21.195956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.195964 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 17 00:13:21.195971 kernel: Detected PIPT I-cache on CPU59 May 17 00:13:21.195980 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 17 00:13:21.195996 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 May 17 00:13:21.196004 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196012 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 17 00:13:21.196020 kernel: Detected PIPT I-cache on CPU60 May 17 00:13:21.196028 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 17 00:13:21.196036 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 17 00:13:21.196044 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196051 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 17 00:13:21.196059 kernel: Detected PIPT I-cache on CPU61 May 17 00:13:21.196069 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 17 00:13:21.196076 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 17 00:13:21.196084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196092 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 17 00:13:21.196100 kernel: Detected PIPT I-cache on CPU62 May 17 00:13:21.196107 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 17 00:13:21.196115 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 17 00:13:21.196123 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196130 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 17 00:13:21.196139 kernel: Detected PIPT I-cache on CPU63 May 17 00:13:21.196147 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 17 00:13:21.196155 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 17 00:13:21.196163 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196170 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] May 17 00:13:21.196178 kernel: Detected PIPT I-cache on CPU64 May 17 00:13:21.196186 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 17 00:13:21.196194 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 17 00:13:21.196202 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196209 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 17 00:13:21.196218 kernel: Detected PIPT I-cache on CPU65 May 17 00:13:21.196226 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 17 00:13:21.196234 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 17 00:13:21.196242 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196249 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 17 00:13:21.196257 kernel: Detected PIPT I-cache on CPU66 May 17 00:13:21.196264 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 17 00:13:21.196272 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 17 00:13:21.196280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196289 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 17 00:13:21.196297 kernel: Detected PIPT I-cache on CPU67 May 17 00:13:21.196304 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 17 00:13:21.196312 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 17 00:13:21.196320 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196327 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 17 00:13:21.196335 kernel: Detected PIPT I-cache on CPU68 May 17 00:13:21.196343 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 17 00:13:21.196351 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 May 17 00:13:21.196360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196367 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 17 00:13:21.196375 kernel: Detected PIPT I-cache on CPU69 May 17 00:13:21.196383 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 17 00:13:21.196391 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 17 00:13:21.196398 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196406 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 17 00:13:21.196414 kernel: Detected PIPT I-cache on CPU70 May 17 00:13:21.196421 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 17 00:13:21.196429 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 17 00:13:21.196438 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196446 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 17 00:13:21.196453 kernel: Detected PIPT I-cache on CPU71 May 17 00:13:21.196461 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 17 00:13:21.196469 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 17 00:13:21.196476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196484 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 17 00:13:21.196492 kernel: Detected PIPT I-cache on CPU72 May 17 00:13:21.196500 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 17 00:13:21.196509 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 17 00:13:21.196517 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196524 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] May 17 00:13:21.196532 kernel: Detected PIPT I-cache on CPU73 May 17 00:13:21.196539 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 17 00:13:21.196547 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 17 00:13:21.196555 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196563 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 17 00:13:21.196570 kernel: Detected PIPT I-cache on CPU74 May 17 00:13:21.196578 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 17 00:13:21.196587 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 17 00:13:21.196595 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196603 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 17 00:13:21.196610 kernel: Detected PIPT I-cache on CPU75 May 17 00:13:21.196618 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 17 00:13:21.196626 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 17 00:13:21.196633 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196641 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 17 00:13:21.196649 kernel: Detected PIPT I-cache on CPU76 May 17 00:13:21.196658 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 17 00:13:21.196666 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 17 00:13:21.196673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196681 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 17 00:13:21.196689 kernel: Detected PIPT I-cache on CPU77 May 17 00:13:21.196696 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 17 00:13:21.196704 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 17 00:13:21.196712 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196719 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 17 00:13:21.196727 kernel: Detected PIPT I-cache on CPU78 May 17 00:13:21.196736 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 17 00:13:21.196744 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 17 00:13:21.196751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196759 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 17 00:13:21.196767 kernel: Detected PIPT I-cache on CPU79 May 17 00:13:21.196774 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 17 00:13:21.196782 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 17 00:13:21.196790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:13:21.196797 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 17 00:13:21.196806 kernel: smp: Brought up 1 node, 80 CPUs May 17 00:13:21.196814 kernel: SMP: Total of 80 processors activated. 
May 17 00:13:21.196822 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:13:21.196829 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:13:21.196837 kernel: CPU features: detected: Common not Private translations May 17 00:13:21.196845 kernel: CPU features: detected: CRC32 instructions May 17 00:13:21.196853 kernel: CPU features: detected: Enhanced Virtualization Traps May 17 00:13:21.196860 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:13:21.196868 kernel: CPU features: detected: LSE atomic instructions May 17 00:13:21.196877 kernel: CPU features: detected: Privileged Access Never May 17 00:13:21.196884 kernel: CPU features: detected: RAS Extension Support May 17 00:13:21.196892 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 17 00:13:21.196900 kernel: CPU: All CPU(s) started at EL2 May 17 00:13:21.196907 kernel: alternatives: applying system-wide alternatives May 17 00:13:21.196915 kernel: devtmpfs: initialized May 17 00:13:21.196923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:13:21.196930 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 17 00:13:21.196938 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:13:21.196947 kernel: SMBIOS 3.4.0 present. 
May 17 00:13:21.196955 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 17 00:13:21.196963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:13:21.196971 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 17 00:13:21.196978 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:13:21.196986 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:13:21.196997 kernel: audit: initializing netlink subsys (disabled) May 17 00:13:21.197004 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 17 00:13:21.197012 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:13:21.197021 kernel: cpuidle: using governor menu May 17 00:13:21.197029 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 00:13:21.197036 kernel: ASID allocator initialised with 32768 entries May 17 00:13:21.197044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:13:21.197052 kernel: Serial: AMBA PL011 UART driver May 17 00:13:21.197059 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 17 00:13:21.197067 kernel: Modules: 0 pages in range for non-PLT usage May 17 00:13:21.197075 kernel: Modules: 509024 pages in range for PLT usage May 17 00:13:21.197083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:13:21.197092 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:13:21.197099 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:13:21.197107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 17 00:13:21.197115 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:13:21.197123 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:13:21.197130 kernel: HugeTLB: registered 64.0 KiB 
page size, pre-allocated 0 pages May 17 00:13:21.197138 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 17 00:13:21.197146 kernel: ACPI: Added _OSI(Module Device) May 17 00:13:21.197153 kernel: ACPI: Added _OSI(Processor Device) May 17 00:13:21.197162 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:13:21.197170 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:13:21.197178 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 17 00:13:21.197185 kernel: ACPI: Interpreter enabled May 17 00:13:21.197193 kernel: ACPI: Using GIC for interrupt routing May 17 00:13:21.197200 kernel: ACPI: MCFG table detected, 8 entries May 17 00:13:21.197208 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197216 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197224 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197233 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197241 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197249 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197256 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197264 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 17 00:13:21.197272 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 17 00:13:21.197280 kernel: printk: console [ttyAMA0] enabled May 17 00:13:21.197288 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 17 00:13:21.197296 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 17 00:13:21.197423 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.197498 kernel: acpi PNP0A08:00: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 17 00:13:21.197564 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.197627 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.197689 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 17 00:13:21.197751 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] May 17 00:13:21.197764 kernel: PCI host bridge to bus 000d:00 May 17 00:13:21.197837 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 17 00:13:21.197895 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 17 00:13:21.197952 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 17 00:13:21.198036 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:13:21.198116 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:13:21.198183 kernel: pci 000d:00:01.0: enabling Extended Tags May 17 00:13:21.198252 kernel: pci 000d:00:01.0: supports D1 D2 May 17 00:13:21.198319 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.198393 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:13:21.198460 kernel: pci 000d:00:02.0: supports D1 D2 May 17 00:13:21.198525 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 17 00:13:21.198598 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:13:21.198666 kernel: pci 000d:00:03.0: supports D1 D2 May 17 00:13:21.198733 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.198805 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:13:21.198872 kernel: pci 000d:00:04.0: supports D1 D2 May 17 00:13:21.198938 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 17 00:13:21.198948 kernel: acpiphp: Slot [1] registered May 17 00:13:21.198956 
kernel: acpiphp: Slot [2] registered May 17 00:13:21.198964 kernel: acpiphp: Slot [3] registered May 17 00:13:21.198974 kernel: acpiphp: Slot [4] registered May 17 00:13:21.199037 kernel: pci_bus 000d:00: on NUMA node 0 May 17 00:13:21.199105 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.199172 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.199238 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.199305 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.199369 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.199438 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.199505 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.199574 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.199640 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.199706 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.199771 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.199836 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.199905 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 17 00:13:21.199970 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 
00:13:21.200039 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] May 17 00:13:21.200105 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:13:21.200171 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 17 00:13:21.200236 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:13:21.200302 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 17 00:13:21.200367 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:13:21.200435 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.200499 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.200564 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.200630 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.200694 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.200759 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.200824 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.200891 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.200956 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.201027 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.201092 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.201158 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.201223 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.201288 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.201353 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.201419 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] May 17 00:13:21.201485 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.201551 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 17 00:13:21.201617 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 00:13:21.201683 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 17 00:13:21.201748 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 17 00:13:21.201813 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:13:21.201882 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.201946 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 17 00:13:21.202016 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:13:21.202080 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 17 00:13:21.202146 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 17 00:13:21.202211 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:13:21.202274 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 17 00:13:21.202331 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 17 00:13:21.202402 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 17 00:13:21.202462 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 00:13:21.202533 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] May 17 00:13:21.202594 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:13:21.202674 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 17 00:13:21.202735 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:13:21.202803 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 17 
00:13:21.202864 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:13:21.202874 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 17 00:13:21.202944 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.203016 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.203079 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.203142 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.203204 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 17 00:13:21.203268 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 17 00:13:21.203278 kernel: PCI host bridge to bus 0000:00 May 17 00:13:21.203343 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 17 00:13:21.203407 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 17 00:13:21.203464 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:13:21.203540 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:13:21.203613 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:13:21.203679 kernel: pci 0000:00:01.0: enabling Extended Tags May 17 00:13:21.203744 kernel: pci 0000:00:01.0: supports D1 D2 May 17 00:13:21.203808 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.203885 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:13:21.203950 kernel: pci 0000:00:02.0: supports D1 D2 May 17 00:13:21.204019 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 17 00:13:21.204092 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:13:21.204159 kernel: pci 0000:00:03.0: supports D1 D2 May 17 00:13:21.204223 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.204296 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:13:21.204364 kernel: pci 0000:00:04.0: supports D1 D2 May 17 00:13:21.204430 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 17 00:13:21.204440 kernel: acpiphp: Slot [1-1] registered May 17 00:13:21.204448 kernel: acpiphp: Slot [2-1] registered May 17 00:13:21.204456 kernel: acpiphp: Slot [3-1] registered May 17 00:13:21.204464 kernel: acpiphp: Slot [4-1] registered May 17 00:13:21.204519 kernel: pci_bus 0000:00: on NUMA node 0 May 17 00:13:21.204585 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.204649 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.204717 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.204781 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.204846 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.204912 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.204977 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.205046 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.205113 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.205180 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.205244 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 May 17 00:13:21.205309 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.205374 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 17 00:13:21.205440 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:13:21.205505 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 17 00:13:21.205573 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:13:21.205638 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 17 00:13:21.205703 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:13:21.205768 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 17 00:13:21.205834 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:13:21.205897 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.205963 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206032 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.206098 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206163 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.206227 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206293 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.206356 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206422 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.206485 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206550 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.206615 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206682 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.206746 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206812 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.206876 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.206940 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.207009 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 17 00:13:21.207075 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:13:21.207140 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 17 00:13:21.207207 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 17 00:13:21.207275 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:13:21.207341 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.207409 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 17 00:13:21.207474 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:13:21.207540 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 17 00:13:21.207604 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 17 00:13:21.207670 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:13:21.207730 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 17 00:13:21.207790 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 17 00:13:21.207860 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 17 00:13:21.207923 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:13:21.207993 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 17 
00:13:21.208055 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:13:21.208130 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 17 00:13:21.208196 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:13:21.208265 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 17 00:13:21.208325 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:13:21.208335 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 17 00:13:21.208406 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.208470 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.208534 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.208598 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.208660 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 17 00:13:21.208722 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 17 00:13:21.208733 kernel: PCI host bridge to bus 0005:00 May 17 00:13:21.208800 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 17 00:13:21.208857 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 17 00:13:21.208915 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 17 00:13:21.208991 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:13:21.209067 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:13:21.209134 kernel: pci 0005:00:01.0: supports D1 D2 May 17 00:13:21.209199 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.209274 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
May 17 00:13:21.209340 kernel: pci 0005:00:03.0: supports D1 D2 May 17 00:13:21.209409 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.209481 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:13:21.209547 kernel: pci 0005:00:05.0: supports D1 D2 May 17 00:13:21.209612 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 17 00:13:21.209687 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 17 00:13:21.209754 kernel: pci 0005:00:07.0: supports D1 D2 May 17 00:13:21.209820 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 17 00:13:21.209832 kernel: acpiphp: Slot [1-2] registered May 17 00:13:21.209841 kernel: acpiphp: Slot [2-2] registered May 17 00:13:21.209912 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 17 00:13:21.209983 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 17 00:13:21.210054 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 17 00:13:21.210129 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 17 00:13:21.210198 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 17 00:13:21.210267 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 17 00:13:21.210328 kernel: pci_bus 0005:00: on NUMA node 0 May 17 00:13:21.210393 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.210460 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.210544 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.210614 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.210680 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.210750 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.210815 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.210881 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.210947 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:13:21.211028 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.211096 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.211161 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 17 00:13:21.211234 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 17 00:13:21.211299 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:13:21.211366 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 17 00:13:21.211431 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:13:21.211497 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 17 00:13:21.211561 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:13:21.211627 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 17 00:13:21.211692 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:13:21.211759 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.211824 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.211891 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 
00:13:21.211957 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.212026 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.212093 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.212158 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.212224 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.212291 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.212357 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.212422 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.212487 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.212553 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.212619 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.212684 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.212749 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.212813 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.212880 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 17 00:13:21.212946 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:13:21.213018 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 17 00:13:21.213084 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 17 00:13:21.213151 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:13:21.213220 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 17 00:13:21.213290 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 17 00:13:21.213355 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 17 00:13:21.213419 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 17 00:13:21.213485 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:13:21.213552 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 17 00:13:21.213620 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 17 00:13:21.213684 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 17 00:13:21.213753 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 17 00:13:21.213819 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:13:21.213880 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 17 00:13:21.213938 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 17 00:13:21.214010 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 17 00:13:21.214073 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:13:21.214150 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 17 00:13:21.214212 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:13:21.214279 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 17 00:13:21.214342 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:13:21.214410 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 17 00:13:21.214474 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:13:21.214484 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 17 00:13:21.214554 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.214618 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.214691 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability] May 17 00:13:21.214757 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.214821 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 17 00:13:21.214883 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 17 00:13:21.214896 kernel: PCI host bridge to bus 0003:00 May 17 00:13:21.214963 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 17 00:13:21.215025 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 17 00:13:21.215084 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 17 00:13:21.215161 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:13:21.215238 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:13:21.215310 kernel: pci 0003:00:01.0: supports D1 D2 May 17 00:13:21.215377 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.215456 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:13:21.215522 kernel: pci 0003:00:03.0: supports D1 D2 May 17 00:13:21.215589 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.215661 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:13:21.215727 kernel: pci 0003:00:05.0: supports D1 D2 May 17 00:13:21.215795 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 17 00:13:21.215806 kernel: acpiphp: Slot [1-3] registered May 17 00:13:21.215813 kernel: acpiphp: Slot [2-3] registered May 17 00:13:21.215885 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 17 00:13:21.215952 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 17 00:13:21.216025 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 17 00:13:21.216093 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 17 00:13:21.216160 kernel: pci 
0003:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:13:21.216230 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 17 00:13:21.216296 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:13:21.216364 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 17 00:13:21.216431 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 00:13:21.216499 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 17 00:13:21.216574 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 17 00:13:21.216642 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 17 00:13:21.216709 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 17 00:13:21.216778 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 17 00:13:21.216846 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 17 00:13:21.216912 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 17 00:13:21.216980 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:13:21.217051 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 17 00:13:21.217121 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 00:13:21.217182 kernel: pci_bus 0003:00: on NUMA node 0 May 17 00:13:21.217250 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.217314 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.217380 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.217449 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.217515 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.217582 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.217651 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 17 00:13:21.217718 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 17 00:13:21.217782 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 17 00:13:21.217848 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:13:21.217925 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 17 00:13:21.217996 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:13:21.218062 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 17 00:13:21.218128 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 00:13:21.218196 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.218262 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.218327 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.218393 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.218458 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.218524 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.218589 kernel: pci 0003:00:05.0: BAR 13: no 
space for [io size 0x1000] May 17 00:13:21.218654 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.218721 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.218786 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.218851 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.218916 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.218982 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.219050 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 17 00:13:21.219116 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:13:21.219181 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 17 00:13:21.219249 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 17 00:13:21.219316 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:13:21.219387 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 17 00:13:21.219455 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 17 00:13:21.219523 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 17 00:13:21.219591 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 17 00:13:21.219660 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 17 00:13:21.219728 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 17 00:13:21.219795 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 17 00:13:21.219863 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 17 00:13:21.219929 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 00:13:21.220000 kernel: pci 0003:03:00.0: BAR 2: failed to 
assign [io size 0x0020] May 17 00:13:21.220068 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 17 00:13:21.220138 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 00:13:21.220206 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 00:13:21.220275 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 17 00:13:21.220343 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 17 00:13:21.220410 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 00:13:21.220476 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 17 00:13:21.220541 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 17 00:13:21.220610 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 00:13:21.220669 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 00:13:21.220728 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 17 00:13:21.220786 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 17 00:13:21.220864 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 17 00:13:21.220926 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:13:21.221237 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 17 00:13:21.221310 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:13:21.221378 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 17 00:13:21.221437 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 00:13:21.221449 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 17 00:13:21.221518 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.221581 kernel: acpi PNP0A08:04: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 17 00:13:21.221647 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.221709 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.221771 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 17 00:13:21.221832 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 17 00:13:21.221843 kernel: PCI host bridge to bus 000c:00 May 17 00:13:21.221908 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 17 00:13:21.221965 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 17 00:13:21.222032 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 17 00:13:21.222108 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:13:21.222185 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:13:21.222250 kernel: pci 000c:00:01.0: enabling Extended Tags May 17 00:13:21.222315 kernel: pci 000c:00:01.0: supports D1 D2 May 17 00:13:21.222381 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.222454 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:13:21.222521 kernel: pci 000c:00:02.0: supports D1 D2 May 17 00:13:21.222586 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 17 00:13:21.222659 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:13:21.222724 kernel: pci 000c:00:03.0: supports D1 D2 May 17 00:13:21.222789 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.222859 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:13:21.222925 kernel: pci 000c:00:04.0: supports D1 D2 May 17 00:13:21.222994 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 17 00:13:21.223005 kernel: acpiphp: Slot [1-4] registered May 17 00:13:21.223013 
kernel: acpiphp: Slot [2-4] registered May 17 00:13:21.223021 kernel: acpiphp: Slot [3-2] registered May 17 00:13:21.223030 kernel: acpiphp: Slot [4-2] registered May 17 00:13:21.223086 kernel: pci_bus 000c:00: on NUMA node 0 May 17 00:13:21.223150 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.223216 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.223282 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.223347 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.223411 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.223476 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.223540 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.223604 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.223668 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.223735 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.223799 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.223863 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.223928 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 17 00:13:21.223995 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] 
May 17 00:13:21.224062 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 17 00:13:21.224125 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:13:21.224193 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 17 00:13:21.224257 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:13:21.224321 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 17 00:13:21.224385 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:13:21.224450 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224514 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224578 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224642 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224709 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224772 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224837 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.224901 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.224964 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225032 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225095 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225160 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225226 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225290 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225354 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.225419 kernel: pci 
000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.225482 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.225547 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 17 00:13:21.225610 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 00:13:21.225675 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 17 00:13:21.225741 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 17 00:13:21.225806 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:13:21.225871 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.225934 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 17 00:13:21.226002 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:13:21.226067 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 17 00:13:21.226135 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 17 00:13:21.226198 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:13:21.226258 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 17 00:13:21.226315 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 17 00:13:21.226384 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 17 00:13:21.226443 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 00:13:21.226519 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 17 00:13:21.226582 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:13:21.226649 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 17 00:13:21.226709 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:13:21.226776 kernel: pci_bus 000c:04: resource 1 [mem 
0x40600000-0x407fffff] May 17 00:13:21.226835 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:13:21.226846 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 17 00:13:21.226917 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.226980 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.227046 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.227108 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.227170 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 17 00:13:21.227231 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 17 00:13:21.227242 kernel: PCI host bridge to bus 0002:00 May 17 00:13:21.227309 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 17 00:13:21.227368 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 17 00:13:21.227424 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 17 00:13:21.227496 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:13:21.227567 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:13:21.227633 kernel: pci 0002:00:01.0: supports D1 D2 May 17 00:13:21.227701 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.227773 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:13:21.227839 kernel: pci 0002:00:03.0: supports D1 D2 May 17 00:13:21.227903 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.227975 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:13:21.228045 kernel: pci 0002:00:05.0: supports D1 D2 May 17 00:13:21.228110 kernel: pci 0002:00:05.0: PME# supported 
from D0 D1 D3hot May 17 00:13:21.228186 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 17 00:13:21.228251 kernel: pci 0002:00:07.0: supports D1 D2 May 17 00:13:21.228314 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 17 00:13:21.228325 kernel: acpiphp: Slot [1-5] registered May 17 00:13:21.228333 kernel: acpiphp: Slot [2-5] registered May 17 00:13:21.228341 kernel: acpiphp: Slot [3-3] registered May 17 00:13:21.228349 kernel: acpiphp: Slot [4-3] registered May 17 00:13:21.228406 kernel: pci_bus 0002:00: on NUMA node 0 May 17 00:13:21.228472 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.228541 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.228605 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:13:21.228674 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.228739 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.228806 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.228872 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.228936 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.229004 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.229071 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.229137 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 
00:13:21.229202 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.229270 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 17 00:13:21.229335 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:13:21.229399 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 17 00:13:21.229463 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:13:21.229528 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 17 00:13:21.229592 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:13:21.229656 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 17 00:13:21.229722 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:13:21.229790 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.229858 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.229923 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.229990 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230056 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230121 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230196 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230266 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230330 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230396 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230460 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230526 kernel: pci 0002:00:05.0: BAR 13: failed 
to assign [io size 0x1000] May 17 00:13:21.230590 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230654 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230718 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.230783 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.230850 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.230915 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 17 00:13:21.230982 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:13:21.231135 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 17 00:13:21.231201 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 17 00:13:21.231265 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:13:21.231329 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 17 00:13:21.231397 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 17 00:13:21.231461 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:13:21.231525 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 17 00:13:21.231589 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 17 00:13:21.231653 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:13:21.231713 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 17 00:13:21.231772 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 17 00:13:21.231841 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 17 00:13:21.231900 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:13:21.231967 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 17 00:13:21.232031 kernel: pci_bus 0002:02: 
resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:13:21.232105 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 17 00:13:21.232169 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:13:21.232236 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 17 00:13:21.232296 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:13:21.232307 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 17 00:13:21.232377 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.232441 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.232504 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.232568 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.232631 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 17 00:13:21.232693 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 17 00:13:21.232704 kernel: PCI host bridge to bus 0001:00 May 17 00:13:21.232768 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 17 00:13:21.232826 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 17 00:13:21.232886 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 17 00:13:21.232956 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:13:21.233034 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:13:21.233099 kernel: pci 0001:00:01.0: enabling Extended Tags May 17 00:13:21.233165 kernel: pci 0001:00:01.0: supports D1 D2 May 17 00:13:21.233237 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233310 kernel: pci 0001:00:02.0: 
[1def:e102] type 01 class 0x060400 May 17 00:13:21.233378 kernel: pci 0001:00:02.0: supports D1 D2 May 17 00:13:21.233443 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233514 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:13:21.233579 kernel: pci 0001:00:03.0: supports D1 D2 May 17 00:13:21.233644 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233715 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:13:21.233784 kernel: pci 0001:00:04.0: supports D1 D2 May 17 00:13:21.233849 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 17 00:13:21.233860 kernel: acpiphp: Slot [1-6] registered May 17 00:13:21.233932 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 00:13:21.234006 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 00:13:21.234077 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 17 00:13:21.234142 kernel: pci 0001:01:00.0: PME# supported from D3cold May 17 00:13:21.234209 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:13:21.234285 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 00:13:21.234353 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 00:13:21.234420 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 17 00:13:21.234486 kernel: pci 0001:01:00.1: PME# supported from D3cold May 17 00:13:21.234497 kernel: acpiphp: Slot [2-6] registered May 17 00:13:21.234505 kernel: acpiphp: Slot [3-4] registered May 17 00:13:21.234513 kernel: acpiphp: Slot [4-4] registered May 17 00:13:21.234572 kernel: pci_bus 0001:00: on NUMA node 0 May 17 00:13:21.234637 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:13:21.234703 kernel: pci 
0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:13:21.234767 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.234831 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:13:21.234896 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.234961 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.235128 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.235202 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.235268 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.235332 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.235396 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:13:21.235460 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 17 00:13:21.235525 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 17 00:13:21.235591 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:13:21.235655 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 17 00:13:21.235719 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:13:21.235782 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 17 00:13:21.235848 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:13:21.235912 kernel: 
pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.235977 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236044 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236112 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236175 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236239 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236303 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236366 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236431 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236495 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236558 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236621 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236687 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236751 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236815 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.236878 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.236945 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 00:13:21.237016 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 00:13:21.237083 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 17 00:13:21.237150 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 17 00:13:21.237216 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 17 00:13:21.237281 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 17 00:13:21.237344 
kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:13:21.237409 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 17 00:13:21.237473 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 17 00:13:21.237537 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:13:21.237603 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.237668 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 17 00:13:21.237732 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:13:21.237797 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 17 00:13:21.237861 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 17 00:13:21.237926 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:13:21.237986 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 17 00:13:21.238048 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 17 00:13:21.238123 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 17 00:13:21.238184 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:13:21.238251 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 17 00:13:21.238310 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:13:21.238377 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 17 00:13:21.238439 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:13:21.238506 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 17 00:13:21.238566 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:13:21.238577 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 17 00:13:21.238646 kernel: acpi 
PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:13:21.238709 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:13:21.238774 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 17 00:13:21.238836 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:13:21.238899 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 17 00:13:21.238961 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 17 00:13:21.238972 kernel: PCI host bridge to bus 0004:00 May 17 00:13:21.239038 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 17 00:13:21.239100 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 17 00:13:21.239158 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 17 00:13:21.239231 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:13:21.239302 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:13:21.239368 kernel: pci 0004:00:01.0: supports D1 D2 May 17 00:13:21.239432 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 17 00:13:21.239505 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:13:21.239571 kernel: pci 0004:00:03.0: supports D1 D2 May 17 00:13:21.239638 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 17 00:13:21.239710 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:13:21.239776 kernel: pci 0004:00:05.0: supports D1 D2 May 17 00:13:21.239840 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 17 00:13:21.239915 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 17 00:13:21.239982 kernel: pci 0004:01:00.0: enabling Extended Tags May 17 00:13:21.240052 kernel: pci 0004:01:00.0: supports D1 D2 May 17 
00:13:21.240122 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:13:21.240201 kernel: pci_bus 0004:02: extended config space not accessible May 17 00:13:21.240280 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 17 00:13:21.240349 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 17 00:13:21.240419 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 17 00:13:21.240488 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 17 00:13:21.240556 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 17 00:13:21.240628 kernel: pci 0004:02:00.0: supports D1 D2 May 17 00:13:21.240697 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:13:21.240773 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 17 00:13:21.240840 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 17 00:13:21.240907 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:13:21.240968 kernel: pci_bus 0004:00: on NUMA node 0 May 17 00:13:21.241037 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 17 00:13:21.241105 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:13:21.241170 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:13:21.241234 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:13:21.241299 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:13:21.241364 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:13:21.241428 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 
00:13:21.241493 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 00:13:21.241559 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:13:21.241624 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 17 00:13:21.241688 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:13:21.241753 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 17 00:13:21.241816 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:13:21.241881 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.241945 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242016 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242082 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242146 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242210 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242274 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242338 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242402 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242467 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242530 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242597 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242664 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 00:13:21.242731 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 17 00:13:21.242797 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 17 00:13:21.242867 kernel: pci 
0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 17 00:13:21.242937 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 17 00:13:21.243008 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 17 00:13:21.243077 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 17 00:13:21.243146 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 17 00:13:21.243215 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 17 00:13:21.243281 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 17 00:13:21.243346 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 17 00:13:21.243411 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:13:21.243478 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 17 00:13:21.243544 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 17 00:13:21.243609 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 17 00:13:21.243676 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:13:21.243741 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 17 00:13:21.243806 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 17 00:13:21.243870 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:13:21.243930 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 00:13:21.243986 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 17 00:13:21.244050 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 17 00:13:21.244118 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 17 00:13:21.244179 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:13:21.244242 kernel: pci_bus 0004:02: resource 1 [mem 
0x20000000-0x22ffffff] May 17 00:13:21.244309 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 17 00:13:21.244369 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:13:21.244439 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 17 00:13:21.244499 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:13:21.244510 kernel: iommu: Default domain type: Translated May 17 00:13:21.244519 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:13:21.244527 kernel: efivars: Registered efivars operations May 17 00:13:21.244594 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 17 00:13:21.244664 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 17 00:13:21.244733 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 17 00:13:21.244746 kernel: vgaarb: loaded May 17 00:13:21.244754 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:13:21.244763 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:13:21.244771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:13:21.244779 kernel: pnp: PnP ACPI init May 17 00:13:21.244851 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 17 00:13:21.244914 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 17 00:13:21.244977 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 17 00:13:21.245039 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved May 17 00:13:21.245100 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 17 00:13:21.245161 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 17 00:13:21.245222 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved May 17 00:13:21.245282 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 17 00:13:21.245293 kernel: pnp: PnP ACPI: found 1 devices May 17 00:13:21.245304 kernel: NET: Registered PF_INET protocol family May 17 00:13:21.245312 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245321 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 00:13:21.245329 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:13:21.245337 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:13:21.245346 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245354 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 17 00:13:21.245363 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245371 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:13:21.245381 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:13:21.245448 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 17 00:13:21.245460 kernel: kvm [1]: IPA Size Limit: 48 bits May 17 00:13:21.245468 kernel: kvm [1]: GICv3: no GICV resource entry May 17 00:13:21.245476 kernel: kvm [1]: disabling GICv2 emulation May 17 00:13:21.245484 kernel: kvm [1]: GIC system register CPU interface enabled May 17 00:13:21.245492 kernel: kvm [1]: vgic interrupt IRQ9 May 17 00:13:21.245500 kernel: kvm [1]: VHE mode initialized successfully May 17 00:13:21.245509 kernel: Initialise system trusted keyrings May 17 00:13:21.245518 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 17 00:13:21.245526 kernel: Key type asymmetric registered May 17 00:13:21.245536 kernel: Asymmetric key parser 'x509' registered May 17 00:13:21.245544 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 17 00:13:21.245552 kernel: io scheduler mq-deadline registered May 17 00:13:21.245561 kernel: io scheduler kyber registered May 17 00:13:21.245569 kernel: io scheduler bfq registered May 17 00:13:21.245577 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:13:21.245585 kernel: ACPI: button: Power Button [PWRB] May 17 00:13:21.245595 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 17 00:13:21.245603 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:13:21.245677 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 17 00:13:21.245740 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.245803 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.245864 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 17 00:13:21.245926 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 17 00:13:21.245991 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 17 00:13:21.246062 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 17 00:13:21.246123 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.246185 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.246246 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 17 00:13:21.246307 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 17 00:13:21.246368 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 17 00:13:21.246439 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 17 00:13:21.246502 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.246562 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.246624 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 17 00:13:21.246685 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 17 00:13:21.246747 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 17 00:13:21.246819 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 17 00:13:21.246882 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.246943 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.247008 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 17 00:13:21.247069 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 17 00:13:21.247133 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 17 00:13:21.247210 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 17 00:13:21.247276 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.247337 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.247398 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 17 00:13:21.247459 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 17 00:13:21.247520 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 17 00:13:21.247591 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 17 00:13:21.247654 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.247716 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.247776 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 17 00:13:21.247838 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 17 00:13:21.247898 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 17 00:13:21.247967 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 17 00:13:21.248031 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.248095 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.248157 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 17 00:13:21.248218 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 17 00:13:21.248278 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 17 00:13:21.248349 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 17 00:13:21.248412 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:13:21.248476 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:13:21.248539 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 17 00:13:21.248599 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 17 00:13:21.248661 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 17 00:13:21.248672 kernel: thunder_xcv, ver 1.0 May 17 00:13:21.248681 kernel: thunder_bgx, ver 1.0 May 17 00:13:21.248689 kernel: nicpf, ver 1.0 May 17 00:13:21.248697 kernel: nicvf, ver 1.0 May 17 00:13:21.248767 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:13:21.248830 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:13:19 UTC (1747440799) May 17 00:13:21.248841 kernel: efifb: probing for efifb May 17 00:13:21.248850 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 17 00:13:21.248858 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:13:21.248866 kernel: efifb: scrolling: redraw May 
17 00:13:21.248874 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:13:21.248883 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:13:21.248893 kernel: fb0: EFI VGA frame buffer device May 17 00:13:21.248901 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 17 00:13:21.248910 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:13:21.248918 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:13:21.248926 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:13:21.248935 kernel: watchdog: Hard watchdog permanently disabled May 17 00:13:21.248943 kernel: NET: Registered PF_INET6 protocol family May 17 00:13:21.248951 kernel: Segment Routing with IPv6 May 17 00:13:21.248959 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:13:21.248969 kernel: NET: Registered PF_PACKET protocol family May 17 00:13:21.248977 kernel: Key type dns_resolver registered May 17 00:13:21.248985 kernel: registered taskstats version 1 May 17 00:13:21.248996 kernel: Loading compiled-in X.509 certificates May 17 00:13:21.249005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:13:21.249013 kernel: Key type .fscrypt registered May 17 00:13:21.249021 kernel: Key type fscrypt-provisioning registered May 17 00:13:21.249029 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:13:21.249037 kernel: ima: Allocated hash algorithm: sha1 May 17 00:13:21.249047 kernel: ima: No architecture policies found May 17 00:13:21.249055 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:13:21.249123 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 17 00:13:21.249192 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 17 00:13:21.249260 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 17 00:13:21.249326 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 17 00:13:21.249392 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 17 00:13:21.249458 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 17 00:13:21.249524 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 17 00:13:21.249593 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 17 00:13:21.249660 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 17 00:13:21.249726 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 17 00:13:21.249793 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 17 00:13:21.249859 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 17 00:13:21.249926 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 17 00:13:21.249996 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 17 00:13:21.250063 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 17 00:13:21.250131 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 17 00:13:21.250200 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 17 00:13:21.250265 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 17 00:13:21.250333 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 17 00:13:21.250398 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 17 00:13:21.250466 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 17 00:13:21.250532 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 17 00:13:21.250599 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 May 17 00:13:21.250667 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 17 00:13:21.250734 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 17 00:13:21.250801 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 17 00:13:21.250866 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 17 00:13:21.250932 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 17 00:13:21.251001 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 17 00:13:21.251069 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 17 00:13:21.251136 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 17 00:13:21.251207 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 17 00:13:21.251272 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 17 00:13:21.251341 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 17 00:13:21.251406 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 17 00:13:21.251472 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 17 00:13:21.251537 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 17 00:13:21.251603 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 17 00:13:21.251670 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 17 00:13:21.251736 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 17 00:13:21.251805 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 17 00:13:21.251870 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 17 00:13:21.251936 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 17 00:13:21.252004 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 17 00:13:21.252071 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 17 00:13:21.252136 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 17 00:13:21.252202 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 17 00:13:21.252267 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
May 17 00:13:21.252336 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 17 00:13:21.252401 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 17 00:13:21.252468 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 17 00:13:21.252532 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 17 00:13:21.252599 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 17 00:13:21.252664 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 17 00:13:21.252731 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 17 00:13:21.252796 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 17 00:13:21.252865 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 17 00:13:21.252930 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 17 00:13:21.252999 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 17 00:13:21.253065 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 17 00:13:21.253133 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 17 00:13:21.253144 kernel: clk: Disabling unused clocks May 17 00:13:21.253152 kernel: Freeing unused kernel memory: 39424K May 17 00:13:21.253160 kernel: Run /init as init process May 17 00:13:21.253170 kernel: with arguments: May 17 00:13:21.253179 kernel: /init May 17 00:13:21.253187 kernel: with environment: May 17 00:13:21.253195 kernel: HOME=/ May 17 00:13:21.253202 kernel: TERM=linux May 17 00:13:21.253210 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:13:21.253221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:13:21.253232 systemd[1]: Detected architecture arm64. May 17 00:13:21.253242 systemd[1]: Running in initrd. 
May 17 00:13:21.253250 systemd[1]: No hostname configured, using default hostname. May 17 00:13:21.253258 systemd[1]: Hostname set to . May 17 00:13:21.253267 systemd[1]: Initializing machine ID from random generator. May 17 00:13:21.253276 systemd[1]: Queued start job for default target initrd.target. May 17 00:13:21.253284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:13:21.253293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:13:21.253302 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:13:21.253312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:13:21.253321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:13:21.253330 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:13:21.253340 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:13:21.253349 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:13:21.253357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:13:21.253368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:13:21.253376 systemd[1]: Reached target paths.target - Path Units. May 17 00:13:21.253385 systemd[1]: Reached target slices.target - Slice Units. May 17 00:13:21.253393 systemd[1]: Reached target swap.target - Swaps. May 17 00:13:21.253402 systemd[1]: Reached target timers.target - Timer Units. May 17 00:13:21.253410 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 00:13:21.253419 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:13:21.253428 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:13:21.253438 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:13:21.253448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:13:21.253456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:13:21.253465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:13:21.253474 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:13:21.253482 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:13:21.253491 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:13:21.253500 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:13:21.253508 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:13:21.253517 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:13:21.253527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:13:21.253557 systemd-journald[899]: Collecting audit messages is disabled. May 17 00:13:21.253577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:13:21.253586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:13:21.253596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:13:21.253605 kernel: Bridge firewalling registered May 17 00:13:21.253614 systemd-journald[899]: Journal started May 17 00:13:21.253632 systemd-journald[899]: Runtime Journal (/run/log/journal/7af364b93484456d89cb6d5fc63f4e8d) is 8.0M, max 4.0G, 3.9G free. 
May 17 00:13:21.211066 systemd-modules-load[901]: Inserted module 'overlay' May 17 00:13:21.285743 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:13:21.233353 systemd-modules-load[901]: Inserted module 'br_netfilter' May 17 00:13:21.291401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:13:21.302215 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:13:21.313101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:13:21.323805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:13:21.348202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:13:21.365182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:13:21.371818 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:13:21.403136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:13:21.420098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:13:21.436947 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:13:21.448578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:13:21.454497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:13:21.483184 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:13:21.490560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:13:21.502543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 17 00:13:21.527704 dracut-cmdline[941]: dracut-dracut-053 May 17 00:13:21.527704 dracut-cmdline[941]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:13:21.516619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:13:21.527049 systemd-resolved[944]: Positive Trust Anchors: May 17 00:13:21.527058 systemd-resolved[944]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:13:21.527090 systemd-resolved[944]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:13:21.542052 systemd-resolved[944]: Defaulting to hostname 'linux'. May 17 00:13:21.543426 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:13:21.579623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:13:21.683994 kernel: SCSI subsystem initialized May 17 00:13:21.698997 kernel: Loading iSCSI transport class v2.0-870. 
May 17 00:13:21.717001 kernel: iscsi: registered transport (tcp) May 17 00:13:21.744654 kernel: iscsi: registered transport (qla4xxx) May 17 00:13:21.744677 kernel: QLogic iSCSI HBA Driver May 17 00:13:21.788825 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:13:21.811160 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:13:21.850994 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:13:21.865522 kernel: device-mapper: uevent: version 1.0.3 May 17 00:13:21.865545 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:13:21.930999 kernel: raid6: neonx8 gen() 15842 MB/s May 17 00:13:21.955998 kernel: raid6: neonx4 gen() 15714 MB/s May 17 00:13:21.980998 kernel: raid6: neonx2 gen() 13280 MB/s May 17 00:13:22.005997 kernel: raid6: neonx1 gen() 10539 MB/s May 17 00:13:22.030998 kernel: raid6: int64x8 gen() 7001 MB/s May 17 00:13:22.055998 kernel: raid6: int64x4 gen() 7388 MB/s May 17 00:13:22.080994 kernel: raid6: int64x2 gen() 6150 MB/s May 17 00:13:22.108982 kernel: raid6: int64x1 gen() 5078 MB/s May 17 00:13:22.109018 kernel: raid6: using algorithm neonx8 gen() 15842 MB/s May 17 00:13:22.143387 kernel: raid6: .... xor() 11975 MB/s, rmw enabled May 17 00:13:22.143409 kernel: raid6: using neon recovery algorithm May 17 00:13:22.166362 kernel: xor: measuring software checksum speed May 17 00:13:22.166384 kernel: 8regs : 19669 MB/sec May 17 00:13:22.174305 kernel: 32regs : 19679 MB/sec May 17 00:13:22.182070 kernel: arm64_neon : 27079 MB/sec May 17 00:13:22.189710 kernel: xor: using function: arm64_neon (27079 MB/sec) May 17 00:13:22.250997 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:13:22.260681 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 17 00:13:22.275116 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:13:22.288238 systemd-udevd[1135]: Using default interface naming scheme 'v255'.
May 17 00:13:22.291248 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:13:22.314131 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:13:22.328202 dracut-pre-trigger[1145]: rd.md=0: removing MD RAID activation
May 17 00:13:22.354332 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:13:22.369096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:13:22.472373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:13:22.501291 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:13:22.501313 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:13:22.513165 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:13:22.684881 kernel: ACPI: bus type USB registered
May 17 00:13:22.684896 kernel: usbcore: registered new interface driver usbfs
May 17 00:13:22.684907 kernel: usbcore: registered new interface driver hub
May 17 00:13:22.684917 kernel: usbcore: registered new device driver usb
May 17 00:13:22.684927 kernel: PTP clock support registered
May 17 00:13:22.684937 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31
May 17 00:13:22.685089 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 00:13:22.685175 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
May 17 00:13:22.685259 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
May 17 00:13:22.685338 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 17 00:13:22.685349 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32
May 17 00:13:22.685435 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 17 00:13:22.685446 kernel: igb 0003:03:00.0: Adding to iommu group 33
May 17 00:13:22.685533 kernel: nvme 0005:03:00.0: Adding to iommu group 34
May 17 00:13:22.685623 kernel: nvme 0005:04:00.0: Adding to iommu group 35
May 17 00:13:22.646391 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:13:22.693914 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:13:22.702060 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:13:22.719057 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:13:22.736201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:13:22.749654 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:13:22.749707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:13:22.767348 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:13:22.778343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:13:22.921534 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
May 17 00:13:22.921759 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 00:13:22.921852 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
May 17 00:13:22.921930 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:13:22.922023 kernel: hub 1-0:1.0: USB hub found
May 17 00:13:22.922128 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:13:22.922206 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
May 17 00:13:22.922296 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:13:22.922376 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:13:22.778386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:13:22.794967 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:13:22.935092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:13:22.945835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:13:22.965146 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:13:23.008985 kernel: hub 2-0:1.0: USB hub found
May 17 00:13:23.009124 kernel: hub 2-0:1.0: 4 ports detected
May 17 00:13:23.009206 kernel: nvme nvme0: pci function 0005:03:00.0
May 17 00:13:23.009296 kernel: nvme nvme1: pci function 0005:04:00.0
May 17 00:13:23.027181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:13:23.133079 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 17 00:13:23.133214 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 17 00:13:23.133291 kernel: igb 0003:03:00.0: added PHC on eth0
May 17 00:13:23.133387 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 17 00:13:23.133466 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:94
May 17 00:13:23.133543 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
May 17 00:13:23.133621 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 00:13:23.133698 kernel: igb 0003:03:00.1: Adding to iommu group 36
May 17 00:13:23.133784 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 17 00:13:23.133234 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:13:23.356080 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 17 00:13:23.356197 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:13:23.356209 kernel: GPT:9289727 != 1875385007
May 17 00:13:23.356219 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:13:23.356229 kernel: GPT:9289727 != 1875385007
May 17 00:13:23.356239 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:13:23.356253 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:13:23.356263 kernel: igb 0003:03:00.1: added PHC on eth1
May 17 00:13:23.356354 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (1181)
May 17 00:13:23.356368 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 17 00:13:23.356445 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1180)
May 17 00:13:23.356456 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:95
May 17 00:13:23.356532 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 17 00:13:23.356610 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 00:13:23.356689 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 17 00:13:23.356772 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 17 00:13:23.356859 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 17 00:13:23.356937 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 17 00:13:23.297543 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 17 00:13:23.369150 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 17 00:13:23.379837 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:13:23.391226 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:13:23.411202 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 00:13:23.438138 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:13:23.465414 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:13:23.465431 disk-uuid[1289]: Primary Header is updated.
May 17 00:13:23.465431 disk-uuid[1289]: Secondary Entries is updated.
May 17 00:13:23.465431 disk-uuid[1289]: Secondary Header is updated.
May 17 00:13:23.501520 kernel: hub 1-3:1.0: USB hub found
May 17 00:13:23.501673 kernel: hub 1-3:1.0: 4 ports detected
May 17 00:13:23.590003 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 17 00:13:23.625134 kernel: hub 2-3:1.0: USB hub found
May 17 00:13:23.625345 kernel: hub 2-3:1.0: 4 ports detected
May 17 00:13:23.648000 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:13:23.660995 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 17 00:13:23.683556 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
May 17 00:13:23.683640 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:13:24.028606 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 17 00:13:24.332001 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:13:24.346996 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 17 00:13:24.363995 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 17 00:13:24.464004 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:13:24.464088 disk-uuid[1290]: The operation has completed successfully.
May 17 00:13:24.485230 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:13:24.485316 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:13:24.522171 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:13:24.533277 sh[1477]: Success
May 17 00:13:24.552993 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:13:24.585014 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:13:24.613162 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:13:24.623195 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:13:24.715807 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 00:13:24.715823 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:24.715833 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:13:24.715844 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:13:24.715854 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:13:24.715864 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:13:24.643824 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:13:24.721799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:13:24.732106 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:13:24.813274 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:24.813289 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:24.813299 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:24.813309 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:24.813319 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:24.741834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:13:24.851353 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:24.841275 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:13:24.879101 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:13:24.922168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:13:24.944154 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:13:24.956875 ignition[1584]: Ignition 2.19.0
May 17 00:13:24.956882 ignition[1584]: Stage: fetch-offline
May 17 00:13:24.962113 unknown[1584]: fetched base config from "system"
May 17 00:13:24.956922 ignition[1584]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:24.962120 unknown[1584]: fetched user config from "system"
May 17 00:13:24.956930 ignition[1584]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:24.968203 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:13:24.957081 ignition[1584]: parsed url from cmdline: ""
May 17 00:13:24.969119 systemd-networkd[1688]: lo: Link UP
May 17 00:13:24.957084 ignition[1584]: no config URL provided
May 17 00:13:24.969123 systemd-networkd[1688]: lo: Gained carrier
May 17 00:13:24.957089 ignition[1584]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:13:24.972743 systemd-networkd[1688]: Enumeration completed
May 17 00:13:24.957141 ignition[1584]: parsing config with SHA512: d7bfd0dda6488b5d259645eb48211a45d6daff5b8649cccd5d9fc83d5c9060f61f5e3189b613b0e901025e9f77c340e1da95c211288dfdcb0aab9e852808229a
May 17 00:13:24.973879 systemd-networkd[1688]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:24.966081 ignition[1584]: fetch-offline: fetch-offline passed
May 17 00:13:24.978613 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:13:24.966086 ignition[1584]: POST message to Packet Timeline
May 17 00:13:24.988875 systemd[1]: Reached target network.target - Network.
May 17 00:13:24.966091 ignition[1584]: POST Status error: resource requires networking
May 17 00:13:24.998575 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:13:24.966164 ignition[1584]: Ignition finished successfully
May 17 00:13:25.012165 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:13:25.036041 ignition[1706]: Ignition 2.19.0
May 17 00:13:25.025426 systemd-networkd[1688]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:25.036048 ignition[1706]: Stage: kargs
May 17 00:13:25.076503 systemd-networkd[1688]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:25.036286 ignition[1706]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:25.036295 ignition[1706]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:25.037310 ignition[1706]: kargs: kargs passed
May 17 00:13:25.037314 ignition[1706]: POST message to Packet Timeline
May 17 00:13:25.037327 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:25.040341 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46938->[::1]:53: read: connection refused
May 17 00:13:25.240473 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #2
May 17 00:13:25.240890 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60609->[::1]:53: read: connection refused
May 17 00:13:25.641900 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #3
May 17 00:13:25.651644 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 00:13:25.649830 systemd-networkd[1688]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:25.642278 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50796->[::1]:53: read: connection refused
May 17 00:13:26.267001 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 00:13:26.269837 systemd-networkd[1688]: eno1: Link UP
May 17 00:13:26.270049 systemd-networkd[1688]: eno2: Link UP
May 17 00:13:26.270175 systemd-networkd[1688]: enP1p1s0f0np0: Link UP
May 17 00:13:26.270320 systemd-networkd[1688]: enP1p1s0f0np0: Gained carrier
May 17 00:13:26.281147 systemd-networkd[1688]: enP1p1s0f1np1: Link UP
May 17 00:13:26.315021 systemd-networkd[1688]: enP1p1s0f0np0: DHCPv4 address 147.28.129.25/31, gateway 147.28.129.24 acquired from 147.28.144.140
May 17 00:13:26.442579 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #4
May 17 00:13:26.442976 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39326->[::1]:53: read: connection refused
May 17 00:13:26.656252 systemd-networkd[1688]: enP1p1s0f1np1: Gained carrier
May 17 00:13:27.280069 systemd-networkd[1688]: enP1p1s0f0np0: Gained IPv6LL
May 17 00:13:28.044227 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #5
May 17 00:13:28.044649 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38247->[::1]:53: read: connection refused
May 17 00:13:28.304054 systemd-networkd[1688]: enP1p1s0f1np1: Gained IPv6LL
May 17 00:13:31.249533 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #6
May 17 00:13:31.802583 ignition[1706]: GET result: OK
May 17 00:13:32.142128 ignition[1706]: Ignition finished successfully
May 17 00:13:32.145963 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:13:32.164112 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:13:32.179657 ignition[1730]: Ignition 2.19.0
May 17 00:13:32.179664 ignition[1730]: Stage: disks
May 17 00:13:32.179822 ignition[1730]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:32.179831 ignition[1730]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:32.180769 ignition[1730]: disks: disks passed
May 17 00:13:32.180773 ignition[1730]: POST message to Packet Timeline
May 17 00:13:32.180786 ignition[1730]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:32.782955 ignition[1730]: GET result: OK
May 17 00:13:33.292107 ignition[1730]: Ignition finished successfully
May 17 00:13:33.295043 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:13:33.300970 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:13:33.308451 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:13:33.316426 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:13:33.324936 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:13:33.333789 systemd[1]: Reached target basic.target - Basic System.
May 17 00:13:33.356151 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:13:33.371365 systemd-fsck[1752]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:13:33.375104 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:13:33.394081 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:13:33.458997 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 00:13:33.459218 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:13:33.469686 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:13:33.493068 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:33.500993 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1762)
May 17 00:13:33.501011 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:33.501022 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:33.501033 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:33.501995 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:33.502023 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:33.595069 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:13:33.601470 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:13:33.612734 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 17 00:13:33.636430 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:13:33.657898 coreos-metadata[1780]: May 17 00:13:33.654 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:33.636461 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:33.685341 coreos-metadata[1782]: May 17 00:13:33.654 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:33.648957 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:33.663391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:13:33.695113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:13:33.728340 initrd-setup-root[1803]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:13:33.734597 initrd-setup-root[1811]: cut: /sysroot/etc/group: No such file or directory
May 17 00:13:33.740858 initrd-setup-root[1819]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:13:33.747059 initrd-setup-root[1826]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:13:33.817798 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:13:33.841059 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:13:33.871554 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:33.847334 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:13:33.877823 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:13:33.892974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:13:33.904294 ignition[1900]: INFO : Ignition 2.19.0
May 17 00:13:33.904294 ignition[1900]: INFO : Stage: mount
May 17 00:13:33.914805 ignition[1900]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:33.914805 ignition[1900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:33.914805 ignition[1900]: INFO : mount: mount passed
May 17 00:13:33.914805 ignition[1900]: INFO : POST message to Packet Timeline
May 17 00:13:33.914805 ignition[1900]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:34.165608 coreos-metadata[1780]: May 17 00:13:34.165 INFO Fetch successful
May 17 00:13:34.201608 coreos-metadata[1782]: May 17 00:13:34.201 INFO Fetch successful
May 17 00:13:34.210536 coreos-metadata[1780]: May 17 00:13:34.210 INFO wrote hostname ci-4081.3.3-n-02409cc2a5 to /sysroot/etc/hostname
May 17 00:13:34.213695 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:34.250133 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 00:13:34.252018 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 17 00:13:34.427524 ignition[1900]: INFO : GET result: OK
May 17 00:13:34.818728 ignition[1900]: INFO : Ignition finished successfully
May 17 00:13:34.821088 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:13:34.840069 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:13:34.851917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:34.886870 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1924)
May 17 00:13:34.886909 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:34.901087 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:34.913968 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:34.936608 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:34.936631 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:34.944708 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:34.973445 ignition[1942]: INFO : Ignition 2.19.0
May 17 00:13:34.973445 ignition[1942]: INFO : Stage: files
May 17 00:13:34.982791 ignition[1942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:34.982791 ignition[1942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:34.982791 ignition[1942]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:13:34.982791 ignition[1942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:13:34.982791 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 17 00:13:34.982791 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 17 00:13:34.979000 unknown[1942]: wrote ssh authorized keys file for user: core
May 17 00:13:35.778444 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:13:36.727254 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:36.738147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 17 00:13:37.004147 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:13:37.261446 ignition[1942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:13:37.286110 ignition[1942]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:37.286110 ignition[1942]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:37.286110 ignition[1942]: INFO : files: files passed
May 17 00:13:37.286110 ignition[1942]: INFO : POST message to Packet Timeline
May 17 00:13:37.286110 ignition[1942]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:37.854267 ignition[1942]: INFO : GET result: OK
May 17 00:13:38.194577 ignition[1942]: INFO : Ignition finished successfully
May 17 00:13:38.197175 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:13:38.214120 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:13:38.220871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:13:38.232839 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:13:38.232916 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:13:38.268089 initrd-setup-root-after-ignition[1984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:38.268089 initrd-setup-root-after-ignition[1984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:38.251072 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:38.314455 initrd-setup-root-after-ignition[1988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:38.263870 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:13:38.293139 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:13:38.328622 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:13:38.328698 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:13:38.338767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:13:38.355028 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:13:38.366514 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:13:38.377174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:13:38.399792 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:38.422137 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:13:38.438608 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:13:38.448010 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:13:38.459478 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:13:38.471002 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:13:38.471102 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:38.482735 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:13:38.493916 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:13:38.505307 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:13:38.516587 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:38.527760 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:13:38.538931 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:13:38.550044 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:13:38.561263 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:13:38.572488 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:13:38.589671 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:13:38.600925 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:13:38.601027 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:13:38.612450 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:13:38.623631 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:13:38.634924 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:13:38.635517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:13:38.646251 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:13:38.646346 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:13:38.657739 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:13:38.657843 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:13:38.669160 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:13:38.680337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:13:38.680427 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:13:38.697519 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:13:38.709283 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:13:38.720859 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:13:38.720962 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:13:38.828317 ignition[2009]: INFO : Ignition 2.19.0
May 17 00:13:38.828317 ignition[2009]: INFO : Stage: umount
May 17 00:13:38.828317 ignition[2009]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:38.828317 ignition[2009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:38.828317 ignition[2009]: INFO : umount: umount passed
May 17 00:13:38.828317 ignition[2009]: INFO : POST message to Packet Timeline
May 17 00:13:38.828317 ignition[2009]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:38.732560 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:13:38.732683 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:13:38.744284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:13:38.744372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:38.755934 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:13:38.756024 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:13:38.767622 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:13:38.767704 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:38.791109 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:13:38.797238 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:13:38.797341 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:13:38.810804 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:13:38.822284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:13:38.822389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:13:38.834396 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:13:38.834482 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:13:38.848181 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:13:38.849052 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:13:38.849129 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:13:38.859030 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:13:38.859106 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:13:39.291703 ignition[2009]: INFO : GET result: OK
May 17 00:13:39.697944 ignition[2009]: INFO : Ignition finished successfully
May 17 00:13:39.701393 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:13:39.701592 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:13:39.708314 systemd[1]: Stopped target network.target - Network.
May 17 00:13:39.717500 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:13:39.717560 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:13:39.727152 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:13:39.727183 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:13:39.736695 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:13:39.736743 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:13:39.746263 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:13:39.746299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:13:39.756003 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:13:39.756033 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:13:39.766007 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:13:39.775585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:13:39.776009 systemd-networkd[1688]: enP1p1s0f1np1: DHCPv6 lease lost
May 17 00:13:39.785534 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:13:39.785656 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:13:39.788145 systemd-networkd[1688]: enP1p1s0f0np0: DHCPv6 lease lost
May 17 00:13:39.797766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:13:39.797893 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:13:39.805907 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:13:39.806050 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:13:39.816202 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:13:39.816351 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:13:39.838127 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:13:39.844999 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:13:39.845061 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:13:39.855125 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:13:39.855156 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:13:39.865269 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:13:39.865297 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:13:39.875674 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:13:39.899409 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:13:39.899515 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:13:39.909332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:13:39.909513 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:13:39.918528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:13:39.918581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:13:39.929263 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:13:39.929299 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:13:39.940354 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:13:39.940390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:13:39.951004 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:13:39.951056 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:13:39.972136 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:13:39.984236 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:13:39.984299 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:13:39.995481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:13:39.995512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:13:40.007277 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:13:40.007347 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:13:40.562508 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:13:40.562612 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:13:40.573893 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:13:40.596093 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:13:40.605713 systemd[1]: Switching root.
May 17 00:13:40.661353 systemd-journald[899]: Journal stopped
May 17 00:13:42.643287 systemd-journald[899]: Received SIGTERM from PID 1 (systemd).
May 17 00:13:42.643315 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:13:42.643326 kernel: SELinux: policy capability open_perms=1
May 17 00:13:42.643334 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:13:42.643341 kernel: SELinux: policy capability always_check_network=0
May 17 00:13:42.643349 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:13:42.643358 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:13:42.643367 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:13:42.643376 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:13:42.643384 kernel: audit: type=1403 audit(1747440820.847:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:13:42.643393 systemd[1]: Successfully loaded SELinux policy in 115.603ms.
May 17 00:13:42.643402 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.676ms.
May 17 00:13:42.643413 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:13:42.643422 systemd[1]: Detected architecture arm64.
May 17 00:13:42.643433 systemd[1]: Detected first boot.
May 17 00:13:42.643442 systemd[1]: Hostname set to .
May 17 00:13:42.643451 systemd[1]: Initializing machine ID from random generator.
May 17 00:13:42.643460 zram_generator::config[2073]: No configuration found.
May 17 00:13:42.643472 systemd[1]: Populated /etc with preset unit settings.
May 17 00:13:42.643481 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:13:42.643490 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:13:42.643499 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:13:42.643509 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:13:42.643518 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:13:42.643527 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:13:42.643537 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:13:42.643548 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:13:42.643557 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:13:42.643567 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:13:42.643576 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:13:42.643585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:13:42.643594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:13:42.643604 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:13:42.643614 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:13:42.643624 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:13:42.643634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:13:42.643643 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 17 00:13:42.643652 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:13:42.643662 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:13:42.643671 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:13:42.643685 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:13:42.643694 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:13:42.643705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:13:42.643715 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:13:42.643725 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:13:42.643734 systemd[1]: Reached target swap.target - Swaps.
May 17 00:13:42.643744 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:13:42.643753 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:13:42.643762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:13:42.643773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:13:42.643783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:13:42.643793 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:13:42.643802 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:13:42.643812 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:13:42.643823 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:13:42.643832 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:13:42.643842 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:13:42.643852 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:13:42.643862 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:13:42.643872 systemd[1]: Reached target machines.target - Containers.
May 17 00:13:42.643881 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:13:42.643891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:13:42.643902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:13:42.643912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:13:42.643921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:13:42.643931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:13:42.643941 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:13:42.643950 kernel: ACPI: bus type drm_connector registered
May 17 00:13:42.643959 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:13:42.643968 kernel: fuse: init (API version 7.39)
May 17 00:13:42.643977 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:13:42.643991 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:13:42.644001 kernel: loop: module loaded
May 17 00:13:42.644010 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:13:42.644020 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:13:42.644029 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:13:42.644039 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:13:42.644048 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:13:42.644058 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:13:42.644084 systemd-journald[2177]: Collecting audit messages is disabled.
May 17 00:13:42.644104 systemd-journald[2177]: Journal started
May 17 00:13:42.644124 systemd-journald[2177]: Runtime Journal (/run/log/journal/c29f6c43f9c247cebf1eab5cf6c1d692) is 8.0M, max 4.0G, 3.9G free.
May 17 00:13:41.366565 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:13:41.386372 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 17 00:13:41.386710 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:13:41.386985 systemd[1]: systemd-journald.service: Consumed 3.470s CPU time.
May 17 00:13:42.668004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:13:42.695001 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:13:42.716005 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:13:42.738985 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:13:42.739048 systemd[1]: Stopped verity-setup.service.
May 17 00:13:42.764003 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:13:42.769098 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:13:42.774532 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:13:42.779913 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:13:42.785197 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:13:42.790497 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:13:42.795720 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:13:42.801075 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:13:42.806502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:13:42.811815 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:13:42.811968 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:13:42.817350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:13:42.817491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:13:42.822855 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:13:42.823002 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:13:42.828149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:13:42.828294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:13:42.833355 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:13:42.833493 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:13:42.838714 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:13:42.838855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:13:42.843919 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:13:42.848902 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:13:42.854105 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:13:42.859192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:13:42.874563 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:13:42.897188 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:13:42.903130 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:13:42.907942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:13:42.907977 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:13:42.913550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:13:42.919300 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:13:42.925123 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:13:42.929956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:13:42.931225 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:13:42.937022 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:13:42.942579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:13:42.943650 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:13:42.948231 systemd-journald[2177]: Time spent on flushing to /var/log/journal/c29f6c43f9c247cebf1eab5cf6c1d692 is 26.900ms for 2344 entries.
May 17 00:13:42.948231 systemd-journald[2177]: System Journal (/var/log/journal/c29f6c43f9c247cebf1eab5cf6c1d692) is 8.0M, max 195.6M, 187.6M free.
May 17 00:13:42.991432 systemd-journald[2177]: Received client request to flush runtime journal.
May 17 00:13:42.991481 kernel: loop0: detected capacity change from 0 to 207008
May 17 00:13:42.960687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:13:42.961799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:13:42.967679 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:13:42.973567 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:13:42.979387 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:13:42.995870 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:13:43.005996 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:13:43.010053 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:13:43.014637 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:13:43.021363 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:13:43.026225 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:13:43.030943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:13:43.035618 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:13:43.046315 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:13:43.047998 kernel: loop1: detected capacity change from 0 to 114432
May 17 00:13:43.073324 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:13:43.079231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:13:43.084575 udevadm[2221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:13:43.085792 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:13:43.086363 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:13:43.104214 systemd-tmpfiles[2249]: ACLs are not supported, ignoring.
May 17 00:13:43.104227 systemd-tmpfiles[2249]: ACLs are not supported, ignoring.
May 17 00:13:43.107979 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:13:43.112055 kernel: loop2: detected capacity change from 0 to 114328
May 17 00:13:43.188247 ldconfig[2204]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:13:43.183487 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:13:43.189003 kernel: loop3: detected capacity change from 0 to 8
May 17 00:13:43.237004 kernel: loop4: detected capacity change from 0 to 207008
May 17 00:13:43.254996 kernel: loop5: detected capacity change from 0 to 114432
May 17 00:13:43.270999 kernel: loop6: detected capacity change from 0 to 114328
May 17 00:13:43.285870 (sd-merge)[2266]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
May 17 00:13:43.286141 kernel: loop7: detected capacity change from 0 to 8
May 17 00:13:43.286313 (sd-merge)[2266]: Merged extensions into '/usr'.
May 17 00:13:43.293172 systemd[1]: Reloading requested from client PID 2216 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:13:43.293184 systemd[1]: Reloading...
May 17 00:13:43.335996 zram_generator::config[2293]: No configuration found.
May 17 00:13:43.429455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:13:43.477808 systemd[1]: Reloading finished in 184 ms.
May 17 00:13:43.505374 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:13:43.511485 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:13:43.531201 systemd[1]: Starting ensure-sysext.service...
May 17 00:13:43.537081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:13:43.543623 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:13:43.550457 systemd[1]: Reloading requested from client PID 2345 ('systemctl') (unit ensure-sysext.service)...
May 17 00:13:43.550466 systemd[1]: Reloading...
May 17 00:13:43.556942 systemd-tmpfiles[2346]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:13:43.557197 systemd-tmpfiles[2346]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:13:43.557812 systemd-tmpfiles[2346]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:13:43.558040 systemd-tmpfiles[2346]: ACLs are not supported, ignoring.
May 17 00:13:43.558086 systemd-tmpfiles[2346]: ACLs are not supported, ignoring.
May 17 00:13:43.561099 systemd-tmpfiles[2346]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:13:43.561107 systemd-tmpfiles[2346]: Skipping /boot
May 17 00:13:43.567883 systemd-tmpfiles[2346]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:13:43.567891 systemd-tmpfiles[2346]: Skipping /boot
May 17 00:13:43.569119 systemd-udevd[2347]: Using default interface naming scheme 'v255'.
May 17 00:13:43.609009 zram_generator::config[2399]: No configuration found.
May 17 00:13:43.609055 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2381)
May 17 00:13:43.641998 kernel: IPMI message handler: version 39.2
May 17 00:13:43.651996 kernel: ipmi device interface
May 17 00:13:43.663992 kernel: ipmi_ssif: IPMI SSIF Interface driver
May 17 00:13:43.664023 kernel: ipmi_si: IPMI System Interface driver
May 17 00:13:43.677581 kernel: ipmi_si: Unable to find any System Interface(s)
May 17 00:13:43.728892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:13:43.792234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 00:13:43.796872 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 17 00:13:43.797268 systemd[1]: Reloading finished in 246 ms.
May 17 00:13:43.816694 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:13:43.838458 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:13:43.855742 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:13:43.864538 systemd[1]: Finished ensure-sysext.service.
May 17 00:13:43.904256 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:13:43.910426 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:13:43.915594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:13:43.916776 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:13:43.922716 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:13:43.928692 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:13:43.934596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:13:43.935083 lvm[2527]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:13:43.940360 augenrules[2544]: No rules
May 17 00:13:43.940538 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:13:43.945508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:13:43.946428 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:13:43.952439 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:13:43.959068 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:13:43.965832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:13:43.972047 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:13:43.977637 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:13:43.983307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:13:43.988682 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:13:43.993659 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:13:43.999099 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:13:44.003980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:13:44.004125 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:13:44.008968 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:13:44.009094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:13:44.013753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:13:44.013868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:13:44.018588 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:13:44.018707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:13:44.023504 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:13:44.029065 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:13:44.035843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:13:44.047846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:13:44.070204 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:13:44.074483 lvm[2574]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:13:44.074696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:13:44.074765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:13:44.075948 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:13:44.082441 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:13:44.087115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:13:44.087504 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:13:44.092320 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:13:44.110440 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:13:44.116513 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:13:44.168385 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:13:44.173325 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:13:44.178851 systemd-resolved[2554]: Positive Trust Anchors:
May 17 00:13:44.178864 systemd-resolved[2554]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:13:44.178897 systemd-resolved[2554]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:13:44.181836 systemd-networkd[2553]: lo: Link UP
May 17 00:13:44.181842 systemd-networkd[2553]: lo: Gained carrier
May 17 00:13:44.182502 systemd-resolved[2554]: Using system hostname 'ci-4081.3.3-n-02409cc2a5'.
May 17 00:13:44.183875 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:13:44.185600 systemd-networkd[2553]: bond0: netdev ready
May 17 00:13:44.188311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:13:44.192610 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:13:44.195496 systemd-networkd[2553]: Enumeration completed
May 17 00:13:44.196912 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:13:44.201167 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:13:44.205616 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:13:44.208930 systemd-networkd[2553]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:ed:dc.network.
May 17 00:13:44.210009 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:13:44.214339 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:13:44.218703 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:13:44.218723 systemd[1]: Reached target paths.target - Path Units.
May 17 00:13:44.223066 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:13:44.227901 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:13:44.233612 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:13:44.248225 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:13:44.253076 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:13:44.257624 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:13:44.262140 systemd[1]: Reached target network.target - Network.
May 17 00:13:44.266492 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:13:44.270712 systemd[1]: Reached target basic.target - Basic System.
May 17 00:13:44.274918 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:13:44.274938 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:13:44.288058 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:13:44.293545 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 17 00:13:44.299046 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:13:44.304589 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:13:44.310147 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:13:44.314554 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:13:44.314891 coreos-metadata[2605]: May 17 00:13:44.314 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:44.315549 jq[2609]: false
May 17 00:13:44.315702 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:13:44.317430 coreos-metadata[2605]: May 17 00:13:44.317 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 17 00:13:44.320239 dbus-daemon[2606]: [system] SELinux support is enabled
May 17 00:13:44.321255 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:13:44.326786 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:13:44.329738 extend-filesystems[2610]: Found loop4
May 17 00:13:44.335900 extend-filesystems[2610]: Found loop5
May 17 00:13:44.335900 extend-filesystems[2610]: Found loop6
May 17 00:13:44.335900 extend-filesystems[2610]: Found loop7
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1p1
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1p2
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1p3
May 17 00:13:44.335900 extend-filesystems[2610]: Found usr
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1p4
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1p6
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1p7
May 17 00:13:44.335900 extend-filesystems[2610]: Found nvme0n1p9
May 17 00:13:44.335900 extend-filesystems[2610]: Checking size of /dev/nvme0n1p9
May 17 00:13:44.472561 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks
May 17 00:13:44.472585 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2367)
May 17 00:13:44.332520 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:13:44.472691 extend-filesystems[2610]: Resized partition /dev/nvme0n1p9
May 17 00:13:44.344582 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:13:44.482083 extend-filesystems[2630]: resize2fs 1.47.1 (20-May-2024)
May 17 00:13:44.350531 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:13:44.394915 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:13:44.395556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:13:44.396264 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:13:44.492254 update_engine[2638]: I20250517 00:13:44.439289 2638 main.cc:92] Flatcar Update Engine starting
May 17 00:13:44.492254 update_engine[2638]: I20250517 00:13:44.441691 2638 update_check_scheduler.cc:74] Next update check in 5m30s
May 17 00:13:44.402998 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:13:44.492496 jq[2639]: true
May 17 00:13:44.411334 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:13:44.423945 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:13:44.492794 tar[2641]: linux-arm64/LICENSE
May 17 00:13:44.492794 tar[2641]: linux-arm64/helm
May 17 00:13:44.424124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:13:44.493100 jq[2642]: true
May 17 00:13:44.424395 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:13:44.424552 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:13:44.424842 systemd-logind[2631]: Watching system buttons on /dev/input/event0 (Power Button)
May 17 00:13:44.431158 systemd-logind[2631]: New seat seat0.
May 17 00:13:44.432050 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:13:44.432215 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:13:44.450543 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:13:44.450942 (ntainerd)[2643]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:13:44.468567 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:13:44.478264 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:13:44.478567 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:13:44.487295 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:13:44.487431 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:13:44.501826 bash[2663]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:13:44.512275 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:13:44.521027 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:13:44.530998 systemd[1]: Starting sshkeys.service...
May 17 00:13:44.542837 locksmithd[2664]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:13:44.544169 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:13:44.550318 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:13:44.570070 coreos-metadata[2681]: May 17 00:13:44.570 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:44.571258 coreos-metadata[2681]: May 17 00:13:44.571 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 17 00:13:44.596904 containerd[2643]: time="2025-05-17T00:13:44.596821560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:13:44.619082 containerd[2643]: time="2025-05-17T00:13:44.619018400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:44.620401 containerd[2643]: time="2025-05-17T00:13:44.620366200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:44.620401 containerd[2643]: time="2025-05-17T00:13:44.620397640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:13:44.620491 containerd[2643]: time="2025-05-17T00:13:44.620412920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:13:44.620611 containerd[2643]: time="2025-05-17T00:13:44.620586320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:13:44.620611 containerd[2643]: time="2025-05-17T00:13:44.620604880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:13:44.620664 containerd[2643]: time="2025-05-17T00:13:44.620653280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:44.620683 containerd[2643]: time="2025-05-17T00:13:44.620666720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:44.620832 containerd[2643]: time="2025-05-17T00:13:44.620816240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:44.620864 containerd[2643]: time="2025-05-17T00:13:44.620831960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:13:44.620864 containerd[2643]: time="2025-05-17T00:13:44.620845840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:44.620897 containerd[2643]: time="2025-05-17T00:13:44.620855640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:13:44.620963 containerd[2643]: time="2025-05-17T00:13:44.620938400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:44.621153 containerd[2643]: time="2025-05-17T00:13:44.621138280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:44.621277 containerd[2643]: time="2025-05-17T00:13:44.621239760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:44.621277 containerd[2643]: time="2025-05-17T00:13:44.621254760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:13:44.621342 containerd[2643]: time="2025-05-17T00:13:44.621330200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:13:44.621386 containerd[2643]: time="2025-05-17T00:13:44.621368920Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:13:44.628145 containerd[2643]: time="2025-05-17T00:13:44.628117640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:13:44.628209 containerd[2643]: time="2025-05-17T00:13:44.628165720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:13:44.628209 containerd[2643]: time="2025-05-17T00:13:44.628181360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:13:44.628209 containerd[2643]: time="2025-05-17T00:13:44.628195960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:13:44.628276 containerd[2643]: time="2025-05-17T00:13:44.628209880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:13:44.628367 containerd[2643]: time="2025-05-17T00:13:44.628348320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:13:44.628584 containerd[2643]: time="2025-05-17T00:13:44.628559400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:13:44.628677 containerd[2643]: time="2025-05-17T00:13:44.628661640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:13:44.628698 containerd[2643]: time="2025-05-17T00:13:44.628677800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 17 00:13:44.628698 containerd[2643]: time="2025-05-17T00:13:44.628691280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:13:44.628784 containerd[2643]: time="2025-05-17T00:13:44.628704280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628784 containerd[2643]: time="2025-05-17T00:13:44.628725040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628784 containerd[2643]: time="2025-05-17T00:13:44.628737560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628784 containerd[2643]: time="2025-05-17T00:13:44.628751560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628784 containerd[2643]: time="2025-05-17T00:13:44.628765440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628784 containerd[2643]: time="2025-05-17T00:13:44.628778800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628879 containerd[2643]: time="2025-05-17T00:13:44.628791480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628879 containerd[2643]: time="2025-05-17T00:13:44.628803160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:13:44.628879 containerd[2643]: time="2025-05-17T00:13:44.628826520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:13:44.628879 containerd[2643]: time="2025-05-17T00:13:44.628841600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:13:44.628879 containerd[2643]: time="2025-05-17T00:13:44.628853960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:13:44.628879 containerd[2643]: time="2025-05-17T00:13:44.628867080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:13:44.628879 containerd[2643]: time="2025-05-17T00:13:44.628878760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628892440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628904480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628916960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628929840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628944200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628955640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628967120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628978560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629004 containerd[2643]: time="2025-05-17T00:13:44.628999080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 17 00:13:44.629152 containerd[2643]: time="2025-05-17T00:13:44.629020480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629152 containerd[2643]: time="2025-05-17T00:13:44.629041800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629152 containerd[2643]: time="2025-05-17T00:13:44.629052880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:13:44.629204 containerd[2643]: time="2025-05-17T00:13:44.629159800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:13:44.629204 containerd[2643]: time="2025-05-17T00:13:44.629175640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:13:44.629204 containerd[2643]: time="2025-05-17T00:13:44.629190720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:13:44.629257 containerd[2643]: time="2025-05-17T00:13:44.629203720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 17 00:13:44.629257 containerd[2643]: time="2025-05-17T00:13:44.629213280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629257 containerd[2643]: time="2025-05-17T00:13:44.629226200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 17 00:13:44.629257 containerd[2643]: time="2025-05-17T00:13:44.629237000Z" level=info msg="NRI interface is disabled by configuration."
May 17 00:13:44.629257 containerd[2643]: time="2025-05-17T00:13:44.629248080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:13:44.629644 containerd[2643]: time="2025-05-17T00:13:44.629588560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:13:44.629644 containerd[2643]: time="2025-05-17T00:13:44.629642600Z" level=info msg="Connect containerd service"
May 17 00:13:44.629786 containerd[2643]: time="2025-05-17T00:13:44.629668920Z" level=info msg="using legacy CRI server"
May 17 00:13:44.629786 containerd[2643]: time="2025-05-17T00:13:44.629676440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 17 00:13:44.629786 containerd[2643]: time="2025-05-17T00:13:44.629749080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:13:44.630383 containerd[2643]: time="2025-05-17T00:13:44.630361640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:13:44.630605 containerd[2643]: time="2025-05-17T00:13:44.630565640Z" level=info msg="Start subscribing containerd event"
May 17 00:13:44.630634 containerd[2643]: time="2025-05-17T00:13:44.630623320Z" level=info msg="Start recovering state"
May 17 00:13:44.630702 containerd[2643]: time="2025-05-17T00:13:44.630691080Z" level=info msg="Start event monitor"
May 17 00:13:44.630723 containerd[2643]: time="2025-05-17T00:13:44.630705680Z" level=info msg="Start snapshots syncer"
May 17 00:13:44.630723 containerd[2643]: time="2025-05-17T00:13:44.630715320Z" level=info msg="Start cni network conf syncer for default"
May 17 00:13:44.630760 containerd[2643]: time="2025-05-17T00:13:44.630723480Z" level=info msg="Start streaming server"
May 17 00:13:44.630843 containerd[2643]: time="2025-05-17T00:13:44.630827600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:13:44.630892 containerd[2643]: time="2025-05-17T00:13:44.630882400Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:13:44.630981 systemd[1]: Started containerd.service - containerd container runtime.
May 17 00:13:44.631391 containerd[2643]: time="2025-05-17T00:13:44.631360080Z" level=info msg="containerd successfully booted in 0.036083s"
May 17 00:13:44.789124 tar[2641]: linux-arm64/README.md
May 17 00:13:44.806649 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 17 00:13:44.855006 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889
May 17 00:13:44.871036 extend-filesystems[2630]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 17 00:13:44.871036 extend-filesystems[2630]: old_desc_blocks = 1, new_desc_blocks = 112
May 17 00:13:44.871036 extend-filesystems[2630]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long.
May 17 00:13:44.901211 extend-filesystems[2610]: Resized filesystem in /dev/nvme0n1p9
May 17 00:13:44.901211 extend-filesystems[2610]: Found nvme1n1
May 17 00:13:44.873562 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:13:44.873825 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 17 00:13:45.083318 sshd_keygen[2636]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:13:45.102116 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:13:45.126360 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:13:45.135568 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:13:45.135747 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 17 00:13:45.142443 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 17 00:13:45.155481 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 17 00:13:45.161899 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 17 00:13:45.168122 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 17 00:13:45.173449 systemd[1]: Reached target getty.target - Login Prompts.
May 17 00:13:45.317553 coreos-metadata[2605]: May 17 00:13:45.317 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 17 00:13:45.317957 coreos-metadata[2605]: May 17 00:13:45.317 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 17 00:13:45.504005 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 00:13:45.520998 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link
May 17 00:13:45.524312 systemd-networkd[2553]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:ed:dd.network.
May 17 00:13:45.571351 coreos-metadata[2681]: May 17 00:13:45.571 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 17 00:13:45.571810 coreos-metadata[2681]: May 17 00:13:45.571 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 17 00:13:46.120004 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 00:13:46.136491 systemd-networkd[2553]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
May 17 00:13:46.136994 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link
May 17 00:13:46.137565 systemd-networkd[2553]: enP1p1s0f0np0: Link UP
May 17 00:13:46.137860 systemd-networkd[2553]: enP1p1s0f0np0: Gained carrier
May 17 00:13:46.157000 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
May 17 00:13:46.167351 systemd-networkd[2553]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:ed:dc.network.
May 17 00:13:46.167639 systemd-networkd[2553]: enP1p1s0f1np1: Link UP
May 17 00:13:46.167887 systemd-networkd[2553]: enP1p1s0f1np1: Gained carrier
May 17 00:13:46.178170 systemd-networkd[2553]: bond0: Link UP
May 17 00:13:46.178451 systemd-networkd[2553]: bond0: Gained carrier
May 17 00:13:46.178620 systemd-timesyncd[2555]: Network configuration changed, trying to establish connection.
May 17 00:13:46.179153 systemd-timesyncd[2555]: Network configuration changed, trying to establish connection.
May 17 00:13:46.179470 systemd-timesyncd[2555]: Network configuration changed, trying to establish connection.
May 17 00:13:46.179614 systemd-timesyncd[2555]: Network configuration changed, trying to establish connection.
May 17 00:13:46.259491 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex
May 17 00:13:46.259524 kernel: bond0: active interface up!
May 17 00:13:46.383999 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex
May 17 00:13:47.318067 coreos-metadata[2605]: May 17 00:13:47.318 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
May 17 00:13:47.504371 systemd-timesyncd[2555]: Network configuration changed, trying to establish connection.
May 17 00:13:47.571933 coreos-metadata[2681]: May 17 00:13:47.571 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
May 17 00:13:47.632050 systemd-networkd[2553]: bond0: Gained IPv6LL
May 17 00:13:47.632294 systemd-timesyncd[2555]: Network configuration changed, trying to establish connection.
May 17 00:13:47.634221 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 17 00:13:47.640084 systemd[1]: Reached target network-online.target - Network is Online.
May 17 00:13:47.658219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:47.664702 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 00:13:47.685985 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 17 00:13:48.344422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:48.350380 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:48.746381 kubelet[2742]: E0517 00:13:48.746334 2742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:48.748680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:48.748823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:49.432227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 17 00:13:49.449309 systemd[1]: Started sshd@0-147.28.129.25:22-147.75.109.163:36570.service - OpenSSH per-connection server daemon (147.75.109.163:36570).
May 17 00:13:49.849706 sshd[2767]: Accepted publickey for core from 147.75.109.163 port 36570 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:13:49.852770 sshd[2767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:49.861540 systemd-logind[2631]: New session 1 of user core.
May 17 00:13:49.862905 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 17 00:13:49.864169 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2
May 17 00:13:49.864386 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity
May 17 00:13:49.890217 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 17 00:13:49.917695 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 17 00:13:49.935356 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 17 00:13:49.942285 (systemd)[2772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:13:50.037381 systemd[2772]: Queued start job for default target default.target.
May 17 00:13:50.047206 systemd[2772]: Created slice app.slice - User Application Slice.
May 17 00:13:50.047232 systemd[2772]: Reached target paths.target - Paths.
May 17 00:13:50.047245 systemd[2772]: Reached target timers.target - Timers.
May 17 00:13:50.048468 systemd[2772]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 17 00:13:50.057451 systemd[2772]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 17 00:13:50.057502 systemd[2772]: Reached target sockets.target - Sockets.
May 17 00:13:50.057515 systemd[2772]: Reached target basic.target - Basic System.
May 17 00:13:50.057555 systemd[2772]: Reached target default.target - Main User Target.
May 17 00:13:50.057579 systemd[2772]: Startup finished in 110ms.
May 17 00:13:50.057955 systemd[1]: Started user@500.service - User Manager for UID 500.
May 17 00:13:50.064091 systemd[1]: Started session-1.scope - Session 1 of User core.
May 17 00:13:50.180759 coreos-metadata[2681]: May 17 00:13:50.180 INFO Fetch successful
May 17 00:13:50.202822 login[2719]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:50.203805 login[2720]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:50.206065 systemd-logind[2631]: New session 2 of user core.
May 17 00:13:50.207457 systemd[1]: Started session-2.scope - Session 2 of User core.
May 17 00:13:50.209225 systemd-logind[2631]: New session 3 of user core.
May 17 00:13:50.210471 systemd[1]: Started session-3.scope - Session 3 of User core.
May 17 00:13:50.231111 unknown[2681]: wrote ssh authorized keys file for user: core
May 17 00:13:50.254202 update-ssh-keys[2810]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:13:50.256017 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 00:13:50.257533 systemd[1]: Finished sshkeys.service.
May 17 00:13:50.277448 coreos-metadata[2605]: May 17 00:13:50.277 INFO Fetch successful
May 17 00:13:50.341943 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 00:13:50.343842 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
May 17 00:13:50.365224 systemd[1]: Started sshd@1-147.28.129.25:22-147.75.109.163:58254.service - OpenSSH per-connection server daemon (147.75.109.163:58254).
May 17 00:13:50.726534 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
May 17 00:13:50.726971 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 00:13:50.727749 systemd[1]: Startup finished in 3.223s (kernel) + 20.372s (initrd) + 9.994s (userspace) = 33.590s.
May 17 00:13:50.767236 sshd[2823]: Accepted publickey for core from 147.75.109.163 port 58254 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:13:50.768370 sshd[2823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:50.771422 systemd-logind[2631]: New session 4 of user core.
May 17 00:13:50.781099 systemd[1]: Started session-4.scope - Session 4 of User core.
May 17 00:13:51.064309 sshd[2823]: pam_unix(sshd:session): session closed for user core
May 17 00:13:51.067006 systemd[1]: sshd@1-147.28.129.25:22-147.75.109.163:58254.service: Deactivated successfully.
May 17 00:13:51.069452 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:13:51.069905 systemd-logind[2631]: Session 4 logged out. Waiting for processes to exit.
May 17 00:13:51.070464 systemd-logind[2631]: Removed session 4.
May 17 00:13:51.143107 systemd[1]: Started sshd@2-147.28.129.25:22-147.75.109.163:58266.service - OpenSSH per-connection server daemon (147.75.109.163:58266).
May 17 00:13:51.552025 sshd[2831]: Accepted publickey for core from 147.75.109.163 port 58266 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:13:51.553160 sshd[2831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:51.555851 systemd-logind[2631]: New session 5 of user core.
May 17 00:13:51.565094 systemd[1]: Started session-5.scope - Session 5 of User core.
May 17 00:13:51.850783 sshd[2831]: pam_unix(sshd:session): session closed for user core
May 17 00:13:51.853979 systemd[1]: sshd@2-147.28.129.25:22-147.75.109.163:58266.service: Deactivated successfully.
May 17 00:13:51.855463 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:13:51.855899 systemd-logind[2631]: Session 5 logged out. Waiting for processes to exit.
May 17 00:13:51.856459 systemd-logind[2631]: Removed session 5.
May 17 00:13:51.920065 systemd[1]: Started sshd@3-147.28.129.25:22-147.75.109.163:58276.service - OpenSSH per-connection server daemon (147.75.109.163:58276).
May 17 00:13:52.328501 sshd[2838]: Accepted publickey for core from 147.75.109.163 port 58276 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:13:52.329638 sshd[2838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:52.332289 systemd-logind[2631]: New session 6 of user core.
May 17 00:13:52.338102 systemd[1]: Started session-6.scope - Session 6 of User core.
May 17 00:13:52.629648 sshd[2838]: pam_unix(sshd:session): session closed for user core
May 17 00:13:52.632603 systemd[1]: sshd@3-147.28.129.25:22-147.75.109.163:58276.service: Deactivated successfully.
May 17 00:13:52.634169 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:13:52.634641 systemd-logind[2631]: Session 6 logged out. Waiting for processes to exit.
May 17 00:13:52.635158 systemd-logind[2631]: Removed session 6.
May 17 00:13:52.705084 systemd[1]: Started sshd@4-147.28.129.25:22-147.75.109.163:58290.service - OpenSSH per-connection server daemon (147.75.109.163:58290).
May 17 00:13:53.122243 sshd[2845]: Accepted publickey for core from 147.75.109.163 port 58290 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:13:53.123391 sshd[2845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:53.126250 systemd-logind[2631]: New session 7 of user core.
May 17 00:13:53.139151 systemd[1]: Started session-7.scope - Session 7 of User core.
May 17 00:13:53.371228 sudo[2848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 00:13:53.371496 sudo[2848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:53.384828 sudo[2848]: pam_unix(sudo:session): session closed for user root
May 17 00:13:53.451133 sshd[2845]: pam_unix(sshd:session): session closed for user core
May 17 00:13:53.455170 systemd[1]: sshd@4-147.28.129.25:22-147.75.109.163:58290.service: Deactivated successfully.
May 17 00:13:53.457675 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:13:53.459465 systemd-logind[2631]: Session 7 logged out. Waiting for processes to exit.
May 17 00:13:53.460067 systemd-logind[2631]: Removed session 7.
May 17 00:13:53.519549 systemd[1]: Started sshd@5-147.28.129.25:22-147.75.109.163:58298.service - OpenSSH per-connection server daemon (147.75.109.163:58298).
May 17 00:13:53.912615 sshd[2856]: Accepted publickey for core from 147.75.109.163 port 58298 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:13:53.913871 sshd[2856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:53.916856 systemd-logind[2631]: New session 8 of user core.
May 17 00:13:53.927165 systemd[1]: Started session-8.scope - Session 8 of User core.
May 17 00:13:54.143345 sudo[2860]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 00:13:54.143620 sudo[2860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:54.146173 sudo[2860]: pam_unix(sudo:session): session closed for user root
May 17 00:13:54.150498 sudo[2859]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 17 00:13:54.150755 sudo[2859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:54.164246 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 17 00:13:54.165297 auditctl[2863]: No rules
May 17 00:13:54.166161 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 00:13:54.166349 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 17 00:13:54.168097 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:13:54.191020 augenrules[2881]: No rules
May 17 00:13:54.193081 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:13:54.193962 sudo[2859]: pam_unix(sudo:session): session closed for user root
May 17 00:13:54.258079 sshd[2856]: pam_unix(sshd:session): session closed for user core
May 17 00:13:54.261659 systemd[1]: sshd@5-147.28.129.25:22-147.75.109.163:58298.service: Deactivated successfully.
May 17 00:13:54.263167 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:13:54.263651 systemd-logind[2631]: Session 8 logged out. Waiting for processes to exit.
May 17 00:13:54.264206 systemd-logind[2631]: Removed session 8.
May 17 00:13:54.327152 systemd[1]: Started sshd@6-147.28.129.25:22-147.75.109.163:58310.service - OpenSSH per-connection server daemon (147.75.109.163:58310).
May 17 00:13:54.730812 sshd[2889]: Accepted publickey for core from 147.75.109.163 port 58310 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:13:54.731987 sshd[2889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:54.734933 systemd-logind[2631]: New session 9 of user core.
May 17 00:13:54.752089 systemd[1]: Started session-9.scope - Session 9 of User core.
May 17 00:13:54.966612 sudo[2892]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:13:54.966886 sudo[2892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:55.238182 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 17 00:13:55.238412 (dockerd)[2923]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 17 00:13:55.450023 dockerd[2923]: time="2025-05-17T00:13:55.449964400Z" level=info msg="Starting up"
May 17 00:13:55.504132 systemd[1]: var-lib-docker-metacopy\x2dcheck2505989089-merged.mount: Deactivated successfully.
May 17 00:13:55.508526 dockerd[2923]: time="2025-05-17T00:13:55.508494680Z" level=info msg="Loading containers: start."
May 17 00:13:55.589997 kernel: Initializing XFRM netlink socket
May 17 00:13:55.607606 systemd-timesyncd[2555]: Network configuration changed, trying to establish connection.
May 17 00:13:55.650789 systemd-networkd[2553]: docker0: Link UP
May 17 00:13:55.666089 dockerd[2923]: time="2025-05-17T00:13:55.666056480Z" level=info msg="Loading containers: done."
May 17 00:13:55.674813 dockerd[2923]: time="2025-05-17T00:13:55.674783200Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:13:55.674879 dockerd[2923]: time="2025-05-17T00:13:55.674857840Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 17 00:13:55.674967 dockerd[2923]: time="2025-05-17T00:13:55.674951720Z" level=info msg="Daemon has completed initialization"
May 17 00:13:55.693831 dockerd[2923]: time="2025-05-17T00:13:55.693714600Z" level=info msg="API listen on /run/docker.sock"
May 17 00:13:55.693831 systemd[1]: Started docker.service - Docker Application Container Engine.
May 17 00:13:55.151312 systemd-resolved[2554]: Clock change detected. Flushing caches.
May 17 00:13:55.159208 systemd-journald[2177]: Time jumped backwards, rotating.
May 17 00:13:55.151534 systemd-timesyncd[2555]: Contacted time server [2606:82c0:22::e]:123 (2.flatcar.pool.ntp.org).
May 17 00:13:55.151582 systemd-timesyncd[2555]: Initial clock synchronization to Sat 2025-05-17 00:13:55.151264 UTC.
May 17 00:13:55.459591 containerd[2643]: time="2025-05-17T00:13:55.459523366Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 17 00:13:55.650783 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3575457583-merged.mount: Deactivated successfully.
May 17 00:13:56.063825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178909642.mount: Deactivated successfully.
May 17 00:13:57.350345 containerd[2643]: time="2025-05-17T00:13:57.350295446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:57.350674 containerd[2643]: time="2025-05-17T00:13:57.350345966Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326311"
May 17 00:13:57.351413 containerd[2643]: time="2025-05-17T00:13:57.351387606Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:57.354310 containerd[2643]: time="2025-05-17T00:13:57.354281886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:57.355432 containerd[2643]: time="2025-05-17T00:13:57.355414566Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.89583524s"
May 17 00:13:57.355458 containerd[2643]: time="2025-05-17T00:13:57.355441606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\""
May 17 00:13:57.356062 containerd[2643]: time="2025-05-17T00:13:57.356040206Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 17 00:13:58.019465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:13:58.029031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:58.132961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:58.136534 (kubelet)[3196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:58.169208 kubelet[3196]: E0517 00:13:58.169173 3196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:58.172077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:58.172212 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:58.813271 containerd[2643]: time="2025-05-17T00:13:58.813233246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:58.813538 containerd[2643]: time="2025-05-17T00:13:58.813295086Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530547"
May 17 00:13:58.814320 containerd[2643]: time="2025-05-17T00:13:58.814294886Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:58.817086 containerd[2643]: time="2025-05-17T00:13:58.817066726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:58.818279 containerd[2643]: time="2025-05-17T00:13:58.818251006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.46217804s"
May 17 00:13:58.818297 containerd[2643]: time="2025-05-17T00:13:58.818286406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\""
May 17 00:13:58.818682 containerd[2643]: time="2025-05-17T00:13:58.818662526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 17 00:14:00.346595 containerd[2643]: time="2025-05-17T00:14:00.346555126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:00.346840 containerd[2643]: time="2025-05-17T00:14:00.346583246Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484190"
May 17 00:14:00.347679 containerd[2643]: time="2025-05-17T00:14:00.347653366Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:00.350445 containerd[2643]: time="2025-05-17T00:14:00.350419126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:00.351548 containerd[2643]: time="2025-05-17T00:14:00.351521286Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.53282716s"
May 17 00:14:00.351567 containerd[2643]: time="2025-05-17T00:14:00.351556406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\""
May 17 00:14:00.351981 containerd[2643]: time="2025-05-17T00:14:00.351961726Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 17 00:14:01.228044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560795470.mount: Deactivated successfully.
May 17 00:14:01.496480 containerd[2643]: time="2025-05-17T00:14:01.496394206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:01.496735 containerd[2643]: time="2025-05-17T00:14:01.496469846Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377375"
May 17 00:14:01.497181 containerd[2643]: time="2025-05-17T00:14:01.497158886Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:01.498817 containerd[2643]: time="2025-05-17T00:14:01.498788206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:01.499605 containerd[2643]: time="2025-05-17T00:14:01.499578886Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.14758412s"
May 17 00:14:01.499632 containerd[2643]: time="2025-05-17T00:14:01.499611366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\""
May 17 00:14:01.499989 containerd[2643]: time="2025-05-17T00:14:01.499969726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 00:14:01.861280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872951863.mount: Deactivated successfully.
May 17 00:14:02.293503 containerd[2643]: time="2025-05-17T00:14:02.293439766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:02.293631 containerd[2643]: time="2025-05-17T00:14:02.293462926Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
May 17 00:14:02.294708 containerd[2643]: time="2025-05-17T00:14:02.294678366Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:02.299590 containerd[2643]: time="2025-05-17T00:14:02.299561126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:02.300797 containerd[2643]: time="2025-05-17T00:14:02.300761566Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 800.75496ms"
May 17 00:14:02.300820 containerd[2643]: time="2025-05-17T00:14:02.300805046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 17 00:14:02.301127 containerd[2643]: time="2025-05-17T00:14:02.301110926Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:14:02.523109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391651506.mount: Deactivated successfully.
May 17 00:14:02.523643 containerd[2643]: time="2025-05-17T00:14:02.523609246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:02.523792 containerd[2643]: time="2025-05-17T00:14:02.523627606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 17 00:14:02.524384 containerd[2643]: time="2025-05-17T00:14:02.524362366Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:02.526386 containerd[2643]: time="2025-05-17T00:14:02.526361806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:02.527272 containerd[2643]: time="2025-05-17T00:14:02.527250366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 226.11392ms"
May 17 00:14:02.527293 containerd[2643]: time="2025-05-17T00:14:02.527277366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 17 00:14:02.527611 containerd[2643]: time="2025-05-17T00:14:02.527592086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 17 00:14:02.902920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023037225.mount: Deactivated successfully.
May 17 00:14:06.264362 containerd[2643]: time="2025-05-17T00:14:06.264311486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:06.264700 containerd[2643]: time="2025-05-17T00:14:06.264369406Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
May 17 00:14:06.265491 containerd[2643]: time="2025-05-17T00:14:06.265467006Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:06.268693 containerd[2643]: time="2025-05-17T00:14:06.268669886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:06.269931 containerd[2643]: time="2025-05-17T00:14:06.269896126Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.74226592s"
May 17 00:14:06.269965 containerd[2643]: time="2025-05-17T00:14:06.269935086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 17 00:14:08.269452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:14:08.279026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:14:08.382827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:14:08.386437 (kubelet)[3431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:14:08.425741 kubelet[3431]: E0517 00:14:08.425706 3431 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:14:08.427875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:14:08.428016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:14:11.252308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:14:11.265172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:14:11.282359 systemd[1]: Reloading requested from client PID 3463 ('systemctl') (unit session-9.scope)...
May 17 00:14:11.282370 systemd[1]: Reloading...
May 17 00:14:11.346905 zram_generator::config[3505]: No configuration found.
May 17 00:14:11.437973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:14:11.510129 systemd[1]: Reloading finished in 227 ms.
May 17 00:14:11.562764 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:14:11.565007 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:14:11.565208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:11.566756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:11.674432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:11.678139 (kubelet)[3570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:14:11.708491 kubelet[3570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:11.708491 kubelet[3570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:14:11.708491 kubelet[3570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:14:11.708660 kubelet[3570]: I0517 00:14:11.708570 3570 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:14:12.606621 kubelet[3570]: I0517 00:14:12.606588 3570 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:14:12.606621 kubelet[3570]: I0517 00:14:12.606616 3570 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:14:12.606872 kubelet[3570]: I0517 00:14:12.606854 3570 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:14:12.626599 kubelet[3570]: E0517 00:14:12.626573 3570 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.129.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:12.628963 kubelet[3570]: I0517 00:14:12.628932 3570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:14:12.633491 kubelet[3570]: E0517 00:14:12.633469 3570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:14:12.633518 kubelet[3570]: I0517 00:14:12.633491 3570 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:14:12.653551 kubelet[3570]: I0517 00:14:12.653523 3570 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:14:12.654153 kubelet[3570]: I0517 00:14:12.654117 3570 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:14:12.654319 kubelet[3570]: I0517 00:14:12.654155 3570 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-02409cc2a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:14:12.654402 kubelet[3570]: I0517 00:14:12.654392 3570 topology_manager.go:138] "Creating topology manager 
with none policy" May 17 00:14:12.654402 kubelet[3570]: I0517 00:14:12.654402 3570 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:14:12.654623 kubelet[3570]: I0517 00:14:12.654612 3570 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:12.657424 kubelet[3570]: I0517 00:14:12.657403 3570 kubelet.go:446] "Attempting to sync node with API server" May 17 00:14:12.657467 kubelet[3570]: I0517 00:14:12.657428 3570 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:14:12.657467 kubelet[3570]: I0517 00:14:12.657445 3570 kubelet.go:352] "Adding apiserver pod source" May 17 00:14:12.657467 kubelet[3570]: I0517 00:14:12.657460 3570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:14:12.659980 kubelet[3570]: I0517 00:14:12.659960 3570 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:14:12.660542 kubelet[3570]: I0517 00:14:12.660528 3570 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:14:12.660652 kubelet[3570]: W0517 00:14:12.660641 3570 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:14:12.661658 kubelet[3570]: I0517 00:14:12.661642 3570 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:14:12.663540 kubelet[3570]: I0517 00:14:12.663516 3570 server.go:1287] "Started kubelet" May 17 00:14:12.663841 kubelet[3570]: I0517 00:14:12.663791 3570 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:14:12.663947 kubelet[3570]: W0517 00:14:12.663885 3570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.129.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused May 17 00:14:12.663995 kubelet[3570]: W0517 00:14:12.663947 3570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.129.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-02409cc2a5&limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused May 17 00:14:12.663995 kubelet[3570]: E0517 00:14:12.663985 3570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.129.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:12.664035 kubelet[3570]: E0517 00:14:12.664009 3570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.129.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-02409cc2a5&limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:12.664928 kubelet[3570]: I0517 00:14:12.664862 3570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 May 17 00:14:12.665181 kubelet[3570]: I0517 00:14:12.665166 3570 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:14:12.666813 kubelet[3570]: I0517 00:14:12.666794 3570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:14:12.666839 kubelet[3570]: I0517 00:14:12.666826 3570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:14:12.666919 kubelet[3570]: I0517 00:14:12.666902 3570 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:14:12.666961 kubelet[3570]: I0517 00:14:12.666946 3570 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:14:12.667039 kubelet[3570]: I0517 00:14:12.667016 3570 reconciler.go:26] "Reconciler: start to sync state" May 17 00:14:12.667062 kubelet[3570]: E0517 00:14:12.667026 3570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-02409cc2a5\" not found" May 17 00:14:12.667105 kubelet[3570]: E0517 00:14:12.666843 3570 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.129.25:6443/api/v1/namespaces/default/events\": dial tcp 147.28.129.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-02409cc2a5.1840283278e7562e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-02409cc2a5,UID:ci-4081.3.3-n-02409cc2a5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-02409cc2a5,},FirstTimestamp:2025-05-17 00:14:12.661655086 +0000 UTC m=+0.980544281,LastTimestamp:2025-05-17 00:14:12.661655086 +0000 UTC m=+0.980544281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-02409cc2a5,}" May 17 00:14:12.667219 kubelet[3570]: E0517 00:14:12.667191 3570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.129.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-02409cc2a5?timeout=10s\": dial tcp 147.28.129.25:6443: connect: connection refused" interval="200ms" May 17 00:14:12.667269 kubelet[3570]: W0517 00:14:12.667232 3570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused May 17 00:14:12.667295 kubelet[3570]: E0517 00:14:12.667280 3570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:12.667315 kubelet[3570]: I0517 00:14:12.667299 3570 factory.go:221] Registration of the systemd container factory successfully May 17 00:14:12.667415 kubelet[3570]: I0517 00:14:12.667400 3570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:14:12.667696 kubelet[3570]: I0517 00:14:12.667678 3570 server.go:479] "Adding debug handlers to kubelet server" May 17 00:14:12.668150 kubelet[3570]: I0517 00:14:12.668130 3570 factory.go:221] Registration of the containerd container factory successfully May 17 00:14:12.668237 kubelet[3570]: E0517 00:14:12.668219 3570 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:14:12.680240 kubelet[3570]: I0517 00:14:12.680205 3570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:14:12.681173 kubelet[3570]: I0517 00:14:12.681160 3570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:14:12.681194 kubelet[3570]: I0517 00:14:12.681177 3570 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:14:12.681215 kubelet[3570]: I0517 00:14:12.681196 3570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:14:12.681215 kubelet[3570]: I0517 00:14:12.681203 3570 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:14:12.681251 kubelet[3570]: E0517 00:14:12.681239 3570 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:14:12.681625 kubelet[3570]: W0517 00:14:12.681587 3570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.129.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused May 17 00:14:12.681649 kubelet[3570]: E0517 00:14:12.681638 3570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.129.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError" May 17 00:14:12.683717 kubelet[3570]: I0517 00:14:12.683705 3570 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:14:12.683740 kubelet[3570]: I0517 00:14:12.683717 3570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" 
May 17 00:14:12.683740 kubelet[3570]: I0517 00:14:12.683733 3570 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:12.684330 kubelet[3570]: I0517 00:14:12.684320 3570 policy_none.go:49] "None policy: Start" May 17 00:14:12.684355 kubelet[3570]: I0517 00:14:12.684335 3570 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:14:12.684355 kubelet[3570]: I0517 00:14:12.684345 3570 state_mem.go:35] "Initializing new in-memory state store" May 17 00:14:12.687817 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:14:12.705185 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:14:12.722213 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:14:12.723031 kubelet[3570]: I0517 00:14:12.723009 3570 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:14:12.723210 kubelet[3570]: I0517 00:14:12.723199 3570 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:14:12.723242 kubelet[3570]: I0517 00:14:12.723212 3570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:14:12.723371 kubelet[3570]: I0517 00:14:12.723358 3570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:14:12.723806 kubelet[3570]: E0517 00:14:12.723788 3570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:14:12.723845 kubelet[3570]: E0517 00:14:12.723836 3570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-02409cc2a5\" not found" May 17 00:14:12.789069 systemd[1]: Created slice kubepods-burstable-pod8ee5832dfc52036b8d914444e7e09051.slice - libcontainer container kubepods-burstable-pod8ee5832dfc52036b8d914444e7e09051.slice. May 17 00:14:12.805551 kubelet[3570]: E0517 00:14:12.805525 3570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-02409cc2a5\" not found" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.807061 systemd[1]: Created slice kubepods-burstable-podb33a9e939475e9b6809e4eb7c169bb90.slice - libcontainer container kubepods-burstable-podb33a9e939475e9b6809e4eb7c169bb90.slice. May 17 00:14:12.821852 kubelet[3570]: E0517 00:14:12.821834 3570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-02409cc2a5\" not found" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.824024 systemd[1]: Created slice kubepods-burstable-poddb74953a23ac9feb3e46d3ce0576a7a4.slice - libcontainer container kubepods-burstable-poddb74953a23ac9feb3e46d3ce0576a7a4.slice. 
May 17 00:14:12.824782 kubelet[3570]: I0517 00:14:12.824764 3570 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.825163 kubelet[3570]: E0517 00:14:12.825136 3570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.129.25:6443/api/v1/nodes\": dial tcp 147.28.129.25:6443: connect: connection refused" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.825227 kubelet[3570]: E0517 00:14:12.825212 3570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-02409cc2a5\" not found" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.867598 kubelet[3570]: E0517 00:14:12.867543 3570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.129.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-02409cc2a5?timeout=10s\": dial tcp 147.28.129.25:6443: connect: connection refused" interval="400ms" May 17 00:14:12.869770 kubelet[3570]: I0517 00:14:12.869741 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db74953a23ac9feb3e46d3ce0576a7a4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-02409cc2a5\" (UID: \"db74953a23ac9feb3e46d3ce0576a7a4\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.869821 kubelet[3570]: I0517 00:14:12.869772 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ee5832dfc52036b8d914444e7e09051-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" (UID: \"8ee5832dfc52036b8d914444e7e09051\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.869821 kubelet[3570]: I0517 00:14:12.869794 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ee5832dfc52036b8d914444e7e09051-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" (UID: \"8ee5832dfc52036b8d914444e7e09051\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.869821 kubelet[3570]: I0517 00:14:12.869811 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ee5832dfc52036b8d914444e7e09051-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" (UID: \"8ee5832dfc52036b8d914444e7e09051\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.869938 kubelet[3570]: I0517 00:14:12.869833 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.869938 kubelet[3570]: I0517 00:14:12.869853 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.869938 kubelet[3570]: I0517 00:14:12.869868 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 
17 00:14:12.869938 kubelet[3570]: I0517 00:14:12.869890 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:12.869938 kubelet[3570]: I0517 00:14:12.869912 3570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:13.027625 kubelet[3570]: I0517 00:14:13.027604 3570 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:13.027899 kubelet[3570]: E0517 00:14:13.027867 3570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.129.25:6443/api/v1/nodes\": dial tcp 147.28.129.25:6443: connect: connection refused" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:13.106761 containerd[2643]: time="2025-05-17T00:14:13.106731126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-02409cc2a5,Uid:8ee5832dfc52036b8d914444e7e09051,Namespace:kube-system,Attempt:0,}" May 17 00:14:13.123299 containerd[2643]: time="2025-05-17T00:14:13.123246086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-02409cc2a5,Uid:b33a9e939475e9b6809e4eb7c169bb90,Namespace:kube-system,Attempt:0,}" May 17 00:14:13.126795 containerd[2643]: time="2025-05-17T00:14:13.126764486Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-02409cc2a5,Uid:db74953a23ac9feb3e46d3ce0576a7a4,Namespace:kube-system,Attempt:0,}" May 17 00:14:13.267942 kubelet[3570]: E0517 00:14:13.267889 3570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.129.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-02409cc2a5?timeout=10s\": dial tcp 147.28.129.25:6443: connect: connection refused" interval="800ms" May 17 00:14:13.429957 kubelet[3570]: I0517 00:14:13.429908 3570 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:13.430217 kubelet[3570]: E0517 00:14:13.430192 3570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.129.25:6443/api/v1/nodes\": dial tcp 147.28.129.25:6443: connect: connection refused" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:13.498837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535152554.mount: Deactivated successfully. 
May 17 00:14:13.499927 containerd[2643]: time="2025-05-17T00:14:13.499905126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:13.500476 containerd[2643]: time="2025-05-17T00:14:13.500451486Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 17 00:14:13.500600 containerd[2643]: time="2025-05-17T00:14:13.500576286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:14:13.500670 containerd[2643]: time="2025-05-17T00:14:13.500652686Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:13.500870 containerd[2643]: time="2025-05-17T00:14:13.500848966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:14:13.501334 containerd[2643]: time="2025-05-17T00:14:13.501318486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:13.504728 containerd[2643]: time="2025-05-17T00:14:13.504700886Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:13.505519 containerd[2643]: time="2025-05-17T00:14:13.505492366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 398.69248ms" May 17 00:14:13.507232 containerd[2643]: time="2025-05-17T00:14:13.507208046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:14:13.508020 containerd[2643]: time="2025-05-17T00:14:13.507996726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 384.68016ms" May 17 00:14:13.508623 containerd[2643]: time="2025-05-17T00:14:13.508602286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 381.78084ms" May 17 00:14:13.553315 kubelet[3570]: W0517 00:14:13.553266 3570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.129.25:6443: connect: connection refused May 17 00:14:13.553372 kubelet[3570]: E0517 00:14:13.553324 3570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.129.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.129.25:6443: connect: connection refused" logger="UnhandledError" May 17 
00:14:13.617770 containerd[2643]: time="2025-05-17T00:14:13.617689326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:13.617770 containerd[2643]: time="2025-05-17T00:14:13.617747166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:13.617770 containerd[2643]: time="2025-05-17T00:14:13.617761086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:13.617770 containerd[2643]: time="2025-05-17T00:14:13.617440366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:13.617770 containerd[2643]: time="2025-05-17T00:14:13.617766766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:13.617908 containerd[2643]: time="2025-05-17T00:14:13.617780526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:13.617908 containerd[2643]: time="2025-05-17T00:14:13.617813406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:13.617908 containerd[2643]: time="2025-05-17T00:14:13.617867686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:13.617908 containerd[2643]: time="2025-05-17T00:14:13.617879686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:13.618560 containerd[2643]: time="2025-05-17T00:14:13.618415686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:13.618560 containerd[2643]: time="2025-05-17T00:14:13.618426886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:13.618560 containerd[2643]: time="2025-05-17T00:14:13.618438766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:13.642018 systemd[1]: Started cri-containerd-084ee0902f19be7a67899cd4faa7f00506205349d7ba5e1b1aa766ad4bb0986c.scope - libcontainer container 084ee0902f19be7a67899cd4faa7f00506205349d7ba5e1b1aa766ad4bb0986c. May 17 00:14:13.643271 systemd[1]: Started cri-containerd-85916586812360a89beaad4b6088a9fcf3b3695b3cdf4c911b9c4c521bf34c4b.scope - libcontainer container 85916586812360a89beaad4b6088a9fcf3b3695b3cdf4c911b9c4c521bf34c4b. May 17 00:14:13.644534 systemd[1]: Started cri-containerd-d67bad3a377c8cce99382f0a8cf3c2500ed189ce849827b1fd61704b3eaa6274.scope - libcontainer container d67bad3a377c8cce99382f0a8cf3c2500ed189ce849827b1fd61704b3eaa6274. 
May 17 00:14:13.666097 containerd[2643]: time="2025-05-17T00:14:13.666061886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-02409cc2a5,Uid:b33a9e939475e9b6809e4eb7c169bb90,Namespace:kube-system,Attempt:0,} returns sandbox id \"084ee0902f19be7a67899cd4faa7f00506205349d7ba5e1b1aa766ad4bb0986c\"" May 17 00:14:13.666159 containerd[2643]: time="2025-05-17T00:14:13.666109006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-02409cc2a5,Uid:db74953a23ac9feb3e46d3ce0576a7a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"85916586812360a89beaad4b6088a9fcf3b3695b3cdf4c911b9c4c521bf34c4b\"" May 17 00:14:13.667361 containerd[2643]: time="2025-05-17T00:14:13.667336926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-02409cc2a5,Uid:8ee5832dfc52036b8d914444e7e09051,Namespace:kube-system,Attempt:0,} returns sandbox id \"d67bad3a377c8cce99382f0a8cf3c2500ed189ce849827b1fd61704b3eaa6274\"" May 17 00:14:13.668372 containerd[2643]: time="2025-05-17T00:14:13.668345846Z" level=info msg="CreateContainer within sandbox \"084ee0902f19be7a67899cd4faa7f00506205349d7ba5e1b1aa766ad4bb0986c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:14:13.668431 containerd[2643]: time="2025-05-17T00:14:13.668406006Z" level=info msg="CreateContainer within sandbox \"85916586812360a89beaad4b6088a9fcf3b3695b3cdf4c911b9c4c521bf34c4b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:14:13.668816 containerd[2643]: time="2025-05-17T00:14:13.668796686Z" level=info msg="CreateContainer within sandbox \"d67bad3a377c8cce99382f0a8cf3c2500ed189ce849827b1fd61704b3eaa6274\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:14:13.687633 containerd[2643]: time="2025-05-17T00:14:13.687539086Z" level=info msg="CreateContainer within sandbox 
\"85916586812360a89beaad4b6088a9fcf3b3695b3cdf4c911b9c4c521bf34c4b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd6a9bfcaff72494ba24bafb41397b21a45e3d1032d4bd8c67ab6670fd87ded6\"" May 17 00:14:13.687961 containerd[2643]: time="2025-05-17T00:14:13.687937486Z" level=info msg="StartContainer for \"dd6a9bfcaff72494ba24bafb41397b21a45e3d1032d4bd8c67ab6670fd87ded6\"" May 17 00:14:13.688040 containerd[2643]: time="2025-05-17T00:14:13.688011046Z" level=info msg="CreateContainer within sandbox \"d67bad3a377c8cce99382f0a8cf3c2500ed189ce849827b1fd61704b3eaa6274\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1ae87e462d79f8f88f8716a43a484726a950da24858d1d80ef7534b56e4ba73\"" May 17 00:14:13.688213 containerd[2643]: time="2025-05-17T00:14:13.688184006Z" level=info msg="CreateContainer within sandbox \"084ee0902f19be7a67899cd4faa7f00506205349d7ba5e1b1aa766ad4bb0986c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"14ffc50af0f0509b6d109d266b97c3d9487f4a113d166b486b891c336ef4aadb\"" May 17 00:14:13.688270 containerd[2643]: time="2025-05-17T00:14:13.688252046Z" level=info msg="StartContainer for \"b1ae87e462d79f8f88f8716a43a484726a950da24858d1d80ef7534b56e4ba73\"" May 17 00:14:13.688460 containerd[2643]: time="2025-05-17T00:14:13.688441126Z" level=info msg="StartContainer for \"14ffc50af0f0509b6d109d266b97c3d9487f4a113d166b486b891c336ef4aadb\"" May 17 00:14:13.725079 systemd[1]: Started cri-containerd-14ffc50af0f0509b6d109d266b97c3d9487f4a113d166b486b891c336ef4aadb.scope - libcontainer container 14ffc50af0f0509b6d109d266b97c3d9487f4a113d166b486b891c336ef4aadb. May 17 00:14:13.726247 systemd[1]: Started cri-containerd-b1ae87e462d79f8f88f8716a43a484726a950da24858d1d80ef7534b56e4ba73.scope - libcontainer container b1ae87e462d79f8f88f8716a43a484726a950da24858d1d80ef7534b56e4ba73. 
May 17 00:14:13.727349 systemd[1]: Started cri-containerd-dd6a9bfcaff72494ba24bafb41397b21a45e3d1032d4bd8c67ab6670fd87ded6.scope - libcontainer container dd6a9bfcaff72494ba24bafb41397b21a45e3d1032d4bd8c67ab6670fd87ded6. May 17 00:14:13.749734 containerd[2643]: time="2025-05-17T00:14:13.749705366Z" level=info msg="StartContainer for \"14ffc50af0f0509b6d109d266b97c3d9487f4a113d166b486b891c336ef4aadb\" returns successfully" May 17 00:14:13.750651 containerd[2643]: time="2025-05-17T00:14:13.750626206Z" level=info msg="StartContainer for \"b1ae87e462d79f8f88f8716a43a484726a950da24858d1d80ef7534b56e4ba73\" returns successfully" May 17 00:14:13.752798 containerd[2643]: time="2025-05-17T00:14:13.752773606Z" level=info msg="StartContainer for \"dd6a9bfcaff72494ba24bafb41397b21a45e3d1032d4bd8c67ab6670fd87ded6\" returns successfully" May 17 00:14:14.232264 kubelet[3570]: I0517 00:14:14.232235 3570 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:14.689150 kubelet[3570]: E0517 00:14:14.689117 3570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-02409cc2a5\" not found" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:14.689150 kubelet[3570]: E0517 00:14:14.689129 3570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-02409cc2a5\" not found" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:14.690191 kubelet[3570]: E0517 00:14:14.690172 3570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-02409cc2a5\" not found" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:14.876386 kubelet[3570]: E0517 00:14:14.876339 3570 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-02409cc2a5\" not found" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:14.974557 kubelet[3570]: 
I0517 00:14:14.974476 3570 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.067317 kubelet[3570]: I0517 00:14:15.067272 3570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.073458 kubelet[3570]: E0517 00:14:15.073425 3570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.073502 kubelet[3570]: I0517 00:14:15.073457 3570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.074847 kubelet[3570]: E0517 00:14:15.074830 3570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.074847 kubelet[3570]: I0517 00:14:15.074846 3570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.076147 kubelet[3570]: E0517 00:14:15.076121 3570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-02409cc2a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.659801 kubelet[3570]: I0517 00:14:15.659775 3570 apiserver.go:52] "Watching apiserver" May 17 00:14:15.667800 kubelet[3570]: I0517 00:14:15.667783 3570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:14:15.690053 kubelet[3570]: I0517 00:14:15.690033 3570 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.690162 kubelet[3570]: I0517 00:14:15.690146 3570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.690267 kubelet[3570]: I0517 00:14:15.690250 3570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.691665 kubelet[3570]: E0517 00:14:15.691647 3570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.691702 kubelet[3570]: E0517 00:14:15.691676 3570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:15.691961 kubelet[3570]: E0517 00:14:15.691943 3570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-02409cc2a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:16.611169 systemd[1]: Reloading requested from client PID 3994 ('systemctl') (unit session-9.scope)... May 17 00:14:16.611179 systemd[1]: Reloading... May 17 00:14:16.681922 zram_generator::config[4037]: No configuration found. 
May 17 00:14:16.691298 kubelet[3570]: I0517 00:14:16.691277 3570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:16.691538 kubelet[3570]: I0517 00:14:16.691386 3570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:16.694384 kubelet[3570]: W0517 00:14:16.694357 3570 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:14:16.694447 kubelet[3570]: W0517 00:14:16.694433 3570 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:14:16.772560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:14:16.856108 systemd[1]: Reloading finished in 244 ms. May 17 00:14:16.889367 kubelet[3570]: I0517 00:14:16.889292 3570 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:14:16.889591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:16.907785 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:14:16.908972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:16.909027 systemd[1]: kubelet.service: Consumed 1.454s CPU time, 152.6M memory peak, 0B memory swap peak. May 17 00:14:16.925067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:17.032366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:14:17.036062 (kubelet)[4097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:14:17.067189 kubelet[4097]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:17.067189 kubelet[4097]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:14:17.067189 kubelet[4097]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:17.067421 kubelet[4097]: I0517 00:14:17.067259 4097 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:14:17.072365 kubelet[4097]: I0517 00:14:17.072342 4097 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:14:17.072395 kubelet[4097]: I0517 00:14:17.072366 4097 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:14:17.072597 kubelet[4097]: I0517 00:14:17.072584 4097 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:14:17.073714 kubelet[4097]: I0517 00:14:17.073701 4097 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:14:17.076831 kubelet[4097]: I0517 00:14:17.076808 4097 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:14:17.079069 kubelet[4097]: E0517 00:14:17.079044 4097 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:14:17.079069 kubelet[4097]: I0517 00:14:17.079071 4097 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:14:17.097917 kubelet[4097]: I0517 00:14:17.097888 4097 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:14:17.098101 kubelet[4097]: I0517 00:14:17.098078 4097 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:14:17.098253 kubelet[4097]: I0517 00:14:17.098103 4097 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.3-n-02409cc2a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:14:17.098323 kubelet[4097]: I0517 00:14:17.098262 4097 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:14:17.098323 kubelet[4097]: I0517 00:14:17.098270 4097 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:14:17.098363 kubelet[4097]: I0517 00:14:17.098330 4097 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:17.098640 kubelet[4097]: I0517 00:14:17.098628 4097 
kubelet.go:446] "Attempting to sync node with API server" May 17 00:14:17.098664 kubelet[4097]: I0517 00:14:17.098645 4097 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:14:17.098664 kubelet[4097]: I0517 00:14:17.098662 4097 kubelet.go:352] "Adding apiserver pod source" May 17 00:14:17.098702 kubelet[4097]: I0517 00:14:17.098671 4097 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:14:17.099243 kubelet[4097]: I0517 00:14:17.099222 4097 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:14:17.099657 kubelet[4097]: I0517 00:14:17.099644 4097 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:14:17.100019 kubelet[4097]: I0517 00:14:17.100006 4097 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:14:17.100039 kubelet[4097]: I0517 00:14:17.100034 4097 server.go:1287] "Started kubelet" May 17 00:14:17.100110 kubelet[4097]: I0517 00:14:17.100075 4097 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:14:17.100191 kubelet[4097]: I0517 00:14:17.100125 4097 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:14:17.100586 kubelet[4097]: I0517 00:14:17.100370 4097 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:14:17.101048 kubelet[4097]: I0517 00:14:17.100985 4097 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:14:17.101048 kubelet[4097]: I0517 00:14:17.100988 4097 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:14:17.101131 kubelet[4097]: E0517 00:14:17.101084 4097 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4081.3.3-n-02409cc2a5\" not found" May 17 00:14:17.101131 kubelet[4097]: I0517 00:14:17.101110 4097 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:14:17.101202 kubelet[4097]: I0517 00:14:17.101181 4097 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:14:17.101302 kubelet[4097]: I0517 00:14:17.101284 4097 reconciler.go:26] "Reconciler: start to sync state" May 17 00:14:17.102701 kubelet[4097]: E0517 00:14:17.102670 4097 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:14:17.103758 kubelet[4097]: I0517 00:14:17.103735 4097 server.go:479] "Adding debug handlers to kubelet server" May 17 00:14:17.103945 kubelet[4097]: I0517 00:14:17.103928 4097 factory.go:221] Registration of the containerd container factory successfully May 17 00:14:17.103945 kubelet[4097]: I0517 00:14:17.103946 4097 factory.go:221] Registration of the systemd container factory successfully May 17 00:14:17.104052 kubelet[4097]: I0517 00:14:17.104033 4097 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:14:17.109672 kubelet[4097]: I0517 00:14:17.109642 4097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:14:17.110764 kubelet[4097]: I0517 00:14:17.110703 4097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:14:17.110805 kubelet[4097]: I0517 00:14:17.110768 4097 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:14:17.110805 kubelet[4097]: I0517 00:14:17.110797 4097 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:14:17.110903 kubelet[4097]: I0517 00:14:17.110809 4097 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:14:17.111020 kubelet[4097]: E0517 00:14:17.110960 4097 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:14:17.132385 kubelet[4097]: I0517 00:14:17.132363 4097 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:14:17.132385 kubelet[4097]: I0517 00:14:17.132378 4097 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:14:17.132475 kubelet[4097]: I0517 00:14:17.132394 4097 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:17.132541 kubelet[4097]: I0517 00:14:17.132526 4097 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:14:17.132568 kubelet[4097]: I0517 00:14:17.132537 4097 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:14:17.132568 kubelet[4097]: I0517 00:14:17.132555 4097 policy_none.go:49] "None policy: Start" May 17 00:14:17.132568 kubelet[4097]: I0517 00:14:17.132563 4097 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:14:17.132620 kubelet[4097]: I0517 00:14:17.132571 4097 state_mem.go:35] "Initializing new in-memory state store" May 17 00:14:17.132669 kubelet[4097]: I0517 00:14:17.132661 4097 state_mem.go:75] "Updated machine memory state" May 17 00:14:17.135574 kubelet[4097]: I0517 00:14:17.135558 4097 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:14:17.135738 kubelet[4097]: I0517 00:14:17.135726 4097 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:14:17.135784 kubelet[4097]: I0517 00:14:17.135739 4097 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:14:17.135884 kubelet[4097]: I0517 00:14:17.135870 4097 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:14:17.136802 kubelet[4097]: E0517 00:14:17.136767 4097 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:14:17.211782 kubelet[4097]: I0517 00:14:17.211694 4097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.211782 kubelet[4097]: I0517 00:14:17.211738 4097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.211782 kubelet[4097]: I0517 00:14:17.211762 4097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.224082 kubelet[4097]: W0517 00:14:17.224064 4097 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:14:17.224117 kubelet[4097]: W0517 00:14:17.224065 4097 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:14:17.224159 kubelet[4097]: E0517 00:14:17.224140 4097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-02409cc2a5\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.224231 kubelet[4097]: W0517 00:14:17.224217 4097 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:14:17.224276 kubelet[4097]: E0517 00:14:17.224263 4097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 
00:14:17.239045 kubelet[4097]: I0517 00:14:17.239029 4097 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.242842 kubelet[4097]: I0517 00:14:17.242815 4097 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.242916 kubelet[4097]: I0517 00:14:17.242876 4097 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302593 kubelet[4097]: I0517 00:14:17.302554 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302593 kubelet[4097]: I0517 00:14:17.302589 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db74953a23ac9feb3e46d3ce0576a7a4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-02409cc2a5\" (UID: \"db74953a23ac9feb3e46d3ce0576a7a4\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302761 kubelet[4097]: I0517 00:14:17.302609 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302761 kubelet[4097]: I0517 00:14:17.302625 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-flexvolume-dir\") 
pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302761 kubelet[4097]: I0517 00:14:17.302685 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302761 kubelet[4097]: I0517 00:14:17.302734 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b33a9e939475e9b6809e4eb7c169bb90-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-02409cc2a5\" (UID: \"b33a9e939475e9b6809e4eb7c169bb90\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302887 kubelet[4097]: I0517 00:14:17.302770 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ee5832dfc52036b8d914444e7e09051-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" (UID: \"8ee5832dfc52036b8d914444e7e09051\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302887 kubelet[4097]: I0517 00:14:17.302799 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ee5832dfc52036b8d914444e7e09051-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" (UID: \"8ee5832dfc52036b8d914444e7e09051\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:17.302887 kubelet[4097]: I0517 00:14:17.302826 4097 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ee5832dfc52036b8d914444e7e09051-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" (UID: \"8ee5832dfc52036b8d914444e7e09051\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:18.099520 kubelet[4097]: I0517 00:14:18.099489 4097 apiserver.go:52] "Watching apiserver" May 17 00:14:18.101697 kubelet[4097]: I0517 00:14:18.101680 4097 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:14:18.116976 kubelet[4097]: I0517 00:14:18.116959 4097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:18.117031 kubelet[4097]: I0517 00:14:18.116999 4097 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:18.119848 kubelet[4097]: W0517 00:14:18.119788 4097 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:14:18.119936 kubelet[4097]: E0517 00:14:18.119917 4097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-02409cc2a5\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" May 17 00:14:18.120301 kubelet[4097]: W0517 00:14:18.120249 4097 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:14:18.120742 kubelet[4097]: E0517 00:14:18.120724 4097 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-02409cc2a5\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" May 17 00:14:18.140206 kubelet[4097]: I0517 00:14:18.140165 4097 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-02409cc2a5" podStartSLOduration=2.140149886 podStartE2EDuration="2.140149886s" podCreationTimestamp="2025-05-17 00:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:18.140099486 +0000 UTC m=+1.101038641" watchObservedRunningTime="2025-05-17 00:14:18.140149886 +0000 UTC m=+1.101089041" May 17 00:14:18.140286 kubelet[4097]: I0517 00:14:18.140263 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-02409cc2a5" podStartSLOduration=2.140258926 podStartE2EDuration="2.140258926s" podCreationTimestamp="2025-05-17 00:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:18.134050046 +0000 UTC m=+1.094989161" watchObservedRunningTime="2025-05-17 00:14:18.140258926 +0000 UTC m=+1.101198041" May 17 00:14:18.151060 kubelet[4097]: I0517 00:14:18.151022 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-02409cc2a5" podStartSLOduration=1.1510100460000001 podStartE2EDuration="1.151010046s" podCreationTimestamp="2025-05-17 00:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:18.145411726 +0000 UTC m=+1.106350881" watchObservedRunningTime="2025-05-17 00:14:18.151010046 +0000 UTC m=+1.111949201" May 17 00:14:23.086855 kubelet[4097]: I0517 00:14:23.086822 4097 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:14:23.087228 containerd[2643]: time="2025-05-17T00:14:23.087075886Z" level=info msg="No cni config template is specified, wait for other system 
components to drop the config." May 17 00:14:23.087386 kubelet[4097]: I0517 00:14:23.087252 4097 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:14:23.905000 systemd[1]: Created slice kubepods-besteffort-podc94d3b91_b036_4a00_a5fc_cbbd5bc77082.slice - libcontainer container kubepods-besteffort-podc94d3b91_b036_4a00_a5fc_cbbd5bc77082.slice. May 17 00:14:23.935870 kubelet[4097]: I0517 00:14:23.935830 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c94d3b91-b036-4a00-a5fc-cbbd5bc77082-xtables-lock\") pod \"kube-proxy-4bhrv\" (UID: \"c94d3b91-b036-4a00-a5fc-cbbd5bc77082\") " pod="kube-system/kube-proxy-4bhrv" May 17 00:14:23.935870 kubelet[4097]: I0517 00:14:23.935864 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c94d3b91-b036-4a00-a5fc-cbbd5bc77082-lib-modules\") pod \"kube-proxy-4bhrv\" (UID: \"c94d3b91-b036-4a00-a5fc-cbbd5bc77082\") " pod="kube-system/kube-proxy-4bhrv" May 17 00:14:23.935870 kubelet[4097]: I0517 00:14:23.935882 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95fq2\" (UniqueName: \"kubernetes.io/projected/c94d3b91-b036-4a00-a5fc-cbbd5bc77082-kube-api-access-95fq2\") pod \"kube-proxy-4bhrv\" (UID: \"c94d3b91-b036-4a00-a5fc-cbbd5bc77082\") " pod="kube-system/kube-proxy-4bhrv" May 17 00:14:23.936083 kubelet[4097]: I0517 00:14:23.935906 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c94d3b91-b036-4a00-a5fc-cbbd5bc77082-kube-proxy\") pod \"kube-proxy-4bhrv\" (UID: \"c94d3b91-b036-4a00-a5fc-cbbd5bc77082\") " pod="kube-system/kube-proxy-4bhrv" May 17 00:14:24.113270 systemd[1]: Created slice 
kubepods-besteffort-podde75e770_f557_4d53_b6e6_8bacd837d0c9.slice - libcontainer container kubepods-besteffort-podde75e770_f557_4d53_b6e6_8bacd837d0c9.slice. May 17 00:14:24.137483 kubelet[4097]: I0517 00:14:24.137450 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjfp\" (UniqueName: \"kubernetes.io/projected/de75e770-f557-4d53-b6e6-8bacd837d0c9-kube-api-access-8bjfp\") pod \"tigera-operator-844669ff44-xzcqn\" (UID: \"de75e770-f557-4d53-b6e6-8bacd837d0c9\") " pod="tigera-operator/tigera-operator-844669ff44-xzcqn" May 17 00:14:24.137483 kubelet[4097]: I0517 00:14:24.137484 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de75e770-f557-4d53-b6e6-8bacd837d0c9-var-lib-calico\") pod \"tigera-operator-844669ff44-xzcqn\" (UID: \"de75e770-f557-4d53-b6e6-8bacd837d0c9\") " pod="tigera-operator/tigera-operator-844669ff44-xzcqn" May 17 00:14:24.225071 containerd[2643]: time="2025-05-17T00:14:24.225005966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bhrv,Uid:c94d3b91-b036-4a00-a5fc-cbbd5bc77082,Namespace:kube-system,Attempt:0,}" May 17 00:14:24.237574 containerd[2643]: time="2025-05-17T00:14:24.237520326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:24.237615 containerd[2643]: time="2025-05-17T00:14:24.237572926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:24.237615 containerd[2643]: time="2025-05-17T00:14:24.237585486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:24.237673 containerd[2643]: time="2025-05-17T00:14:24.237658286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:24.258084 systemd[1]: Started cri-containerd-22b86a4360c859a76ce16d80b926c540f8c5657a5b5d7c153a148ba9c5291309.scope - libcontainer container 22b86a4360c859a76ce16d80b926c540f8c5657a5b5d7c153a148ba9c5291309. May 17 00:14:24.273277 containerd[2643]: time="2025-05-17T00:14:24.273245806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bhrv,Uid:c94d3b91-b036-4a00-a5fc-cbbd5bc77082,Namespace:kube-system,Attempt:0,} returns sandbox id \"22b86a4360c859a76ce16d80b926c540f8c5657a5b5d7c153a148ba9c5291309\"" May 17 00:14:24.275208 containerd[2643]: time="2025-05-17T00:14:24.275186726Z" level=info msg="CreateContainer within sandbox \"22b86a4360c859a76ce16d80b926c540f8c5657a5b5d7c153a148ba9c5291309\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:14:24.283316 containerd[2643]: time="2025-05-17T00:14:24.283277806Z" level=info msg="CreateContainer within sandbox \"22b86a4360c859a76ce16d80b926c540f8c5657a5b5d7c153a148ba9c5291309\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7de0388e7f5204aa164ffcff1c9aa5a8bd9170c82c4f4ddc2b072c11ed825d3\"" May 17 00:14:24.283753 containerd[2643]: time="2025-05-17T00:14:24.283725246Z" level=info msg="StartContainer for \"f7de0388e7f5204aa164ffcff1c9aa5a8bd9170c82c4f4ddc2b072c11ed825d3\"" May 17 00:14:24.319070 systemd[1]: Started cri-containerd-f7de0388e7f5204aa164ffcff1c9aa5a8bd9170c82c4f4ddc2b072c11ed825d3.scope - libcontainer container f7de0388e7f5204aa164ffcff1c9aa5a8bd9170c82c4f4ddc2b072c11ed825d3. 
May 17 00:14:24.338020 containerd[2643]: time="2025-05-17T00:14:24.337990926Z" level=info msg="StartContainer for \"f7de0388e7f5204aa164ffcff1c9aa5a8bd9170c82c4f4ddc2b072c11ed825d3\" returns successfully" May 17 00:14:24.415938 containerd[2643]: time="2025-05-17T00:14:24.415908486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-xzcqn,Uid:de75e770-f557-4d53-b6e6-8bacd837d0c9,Namespace:tigera-operator,Attempt:0,}" May 17 00:14:24.428860 containerd[2643]: time="2025-05-17T00:14:24.428504486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:24.428905 containerd[2643]: time="2025-05-17T00:14:24.428854846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:24.428905 containerd[2643]: time="2025-05-17T00:14:24.428869046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:24.429027 containerd[2643]: time="2025-05-17T00:14:24.429002126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:24.450010 systemd[1]: Started cri-containerd-5668b49639ec4ca7ac64f02bd6df123e9940f48c77f428abc247b6db153aa9b7.scope - libcontainer container 5668b49639ec4ca7ac64f02bd6df123e9940f48c77f428abc247b6db153aa9b7. 
May 17 00:14:24.472476 containerd[2643]: time="2025-05-17T00:14:24.472450046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-xzcqn,Uid:de75e770-f557-4d53-b6e6-8bacd837d0c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5668b49639ec4ca7ac64f02bd6df123e9940f48c77f428abc247b6db153aa9b7\"" May 17 00:14:24.473585 containerd[2643]: time="2025-05-17T00:14:24.473568286Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:14:25.865089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615244403.mount: Deactivated successfully. May 17 00:14:26.475197 kubelet[4097]: I0517 00:14:26.475152 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4bhrv" podStartSLOduration=3.475124846 podStartE2EDuration="3.475124846s" podCreationTimestamp="2025-05-17 00:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:25.138387286 +0000 UTC m=+8.099326441" watchObservedRunningTime="2025-05-17 00:14:26.475124846 +0000 UTC m=+9.436064001" May 17 00:14:27.699516 containerd[2643]: time="2025-05-17T00:14:27.699475246Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:27.699812 containerd[2643]: time="2025-05-17T00:14:27.699490246Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 17 00:14:27.700338 containerd[2643]: time="2025-05-17T00:14:27.700318926Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:27.702290 containerd[2643]: time="2025-05-17T00:14:27.702272606Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:27.703089 containerd[2643]: time="2025-05-17T00:14:27.703070686Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 3.229477s" May 17 00:14:27.703114 containerd[2643]: time="2025-05-17T00:14:27.703097366Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 17 00:14:27.705253 containerd[2643]: time="2025-05-17T00:14:27.705230846Z" level=info msg="CreateContainer within sandbox \"5668b49639ec4ca7ac64f02bd6df123e9940f48c77f428abc247b6db153aa9b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:14:27.709928 containerd[2643]: time="2025-05-17T00:14:27.709901166Z" level=info msg="CreateContainer within sandbox \"5668b49639ec4ca7ac64f02bd6df123e9940f48c77f428abc247b6db153aa9b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da\"" May 17 00:14:27.710266 containerd[2643]: time="2025-05-17T00:14:27.710246086Z" level=info msg="StartContainer for \"8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da\"" May 17 00:14:27.742020 systemd[1]: Started cri-containerd-8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da.scope - libcontainer container 8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da. 
May 17 00:14:27.758537 containerd[2643]: time="2025-05-17T00:14:27.758345646Z" level=info msg="StartContainer for \"8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da\" returns successfully" May 17 00:14:28.137441 kubelet[4097]: I0517 00:14:28.137395 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-xzcqn" podStartSLOduration=0.906783142 podStartE2EDuration="4.137380382s" podCreationTimestamp="2025-05-17 00:14:24 +0000 UTC" firstStartedPulling="2025-05-17 00:14:24.473253646 +0000 UTC m=+7.434192801" lastFinishedPulling="2025-05-17 00:14:27.703850886 +0000 UTC m=+10.664790041" observedRunningTime="2025-05-17 00:14:28.137255822 +0000 UTC m=+11.098194977" watchObservedRunningTime="2025-05-17 00:14:28.137380382 +0000 UTC m=+11.098319537" May 17 00:14:29.288589 update_engine[2638]: I20250517 00:14:29.288213 2638 update_attempter.cc:509] Updating boot flags... May 17 00:14:29.319914 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4607) May 17 00:14:29.349909 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4610) May 17 00:14:29.369909 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4610) May 17 00:14:29.429789 systemd[1]: cri-containerd-8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da.scope: Deactivated successfully. May 17 00:14:29.443523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da-rootfs.mount: Deactivated successfully. 
May 17 00:14:29.499823 containerd[2643]: time="2025-05-17T00:14:29.499760220Z" level=info msg="shim disconnected" id=8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da namespace=k8s.io May 17 00:14:29.499823 containerd[2643]: time="2025-05-17T00:14:29.499815220Z" level=warning msg="cleaning up after shim disconnected" id=8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da namespace=k8s.io May 17 00:14:29.499823 containerd[2643]: time="2025-05-17T00:14:29.499823860Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:14:30.134029 kubelet[4097]: I0517 00:14:30.133984 4097 scope.go:117] "RemoveContainer" containerID="8d91b39955c00b3ff3eea8166e8d93daabbb0fc25a0dbdde0d7485f4f9c442da" May 17 00:14:30.135259 containerd[2643]: time="2025-05-17T00:14:30.135230010Z" level=info msg="CreateContainer within sandbox \"5668b49639ec4ca7ac64f02bd6df123e9940f48c77f428abc247b6db153aa9b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 00:14:30.139662 containerd[2643]: time="2025-05-17T00:14:30.139593251Z" level=info msg="CreateContainer within sandbox \"5668b49639ec4ca7ac64f02bd6df123e9940f48c77f428abc247b6db153aa9b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"96e9d95c4eee97b0dafdade536bc44ff574c857cc388a6e3ed762f1e3d98ca54\"" May 17 00:14:30.140364 containerd[2643]: time="2025-05-17T00:14:30.139936251Z" level=info msg="StartContainer for \"96e9d95c4eee97b0dafdade536bc44ff574c857cc388a6e3ed762f1e3d98ca54\"" May 17 00:14:30.173112 systemd[1]: Started cri-containerd-96e9d95c4eee97b0dafdade536bc44ff574c857cc388a6e3ed762f1e3d98ca54.scope - libcontainer container 96e9d95c4eee97b0dafdade536bc44ff574c857cc388a6e3ed762f1e3d98ca54. 
May 17 00:14:30.189548 containerd[2643]: time="2025-05-17T00:14:30.189513776Z" level=info msg="StartContainer for \"96e9d95c4eee97b0dafdade536bc44ff574c857cc388a6e3ed762f1e3d98ca54\" returns successfully" May 17 00:14:32.642116 sudo[2892]: pam_unix(sudo:session): session closed for user root May 17 00:14:32.705815 sshd[2889]: pam_unix(sshd:session): session closed for user core May 17 00:14:32.709541 systemd[1]: sshd@6-147.28.129.25:22-147.75.109.163:58310.service: Deactivated successfully. May 17 00:14:32.711404 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:14:32.712179 systemd[1]: session-9.scope: Consumed 7.244s CPU time, 172.2M memory peak, 0B memory swap peak. May 17 00:14:32.712565 systemd-logind[2631]: Session 9 logged out. Waiting for processes to exit. May 17 00:14:32.713207 systemd-logind[2631]: Removed session 9. May 17 00:14:38.871783 systemd[1]: Created slice kubepods-besteffort-pod096cbbc3_f9ec_40ae_8c79_9052f7d66a61.slice - libcontainer container kubepods-besteffort-pod096cbbc3_f9ec_40ae_8c79_9052f7d66a61.slice. 
May 17 00:14:38.924155 kubelet[4097]: I0517 00:14:38.924121 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/096cbbc3-f9ec-40ae-8c79-9052f7d66a61-typha-certs\") pod \"calico-typha-bddb49b7b-tr7q2\" (UID: \"096cbbc3-f9ec-40ae-8c79-9052f7d66a61\") " pod="calico-system/calico-typha-bddb49b7b-tr7q2" May 17 00:14:38.924155 kubelet[4097]: I0517 00:14:38.924156 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096cbbc3-f9ec-40ae-8c79-9052f7d66a61-tigera-ca-bundle\") pod \"calico-typha-bddb49b7b-tr7q2\" (UID: \"096cbbc3-f9ec-40ae-8c79-9052f7d66a61\") " pod="calico-system/calico-typha-bddb49b7b-tr7q2" May 17 00:14:38.924488 kubelet[4097]: I0517 00:14:38.924176 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85trb\" (UniqueName: \"kubernetes.io/projected/096cbbc3-f9ec-40ae-8c79-9052f7d66a61-kube-api-access-85trb\") pod \"calico-typha-bddb49b7b-tr7q2\" (UID: \"096cbbc3-f9ec-40ae-8c79-9052f7d66a61\") " pod="calico-system/calico-typha-bddb49b7b-tr7q2" May 17 00:14:39.174734 containerd[2643]: time="2025-05-17T00:14:39.174627225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bddb49b7b-tr7q2,Uid:096cbbc3-f9ec-40ae-8c79-9052f7d66a61,Namespace:calico-system,Attempt:0,}" May 17 00:14:39.181784 systemd[1]: Created slice kubepods-besteffort-pod12619c8a_9b68_439b_993f_4e7fac9d75c3.slice - libcontainer container kubepods-besteffort-pod12619c8a_9b68_439b_993f_4e7fac9d75c3.slice. May 17 00:14:39.187833 containerd[2643]: time="2025-05-17T00:14:39.187778946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:39.187872 containerd[2643]: time="2025-05-17T00:14:39.187830226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:39.187872 containerd[2643]: time="2025-05-17T00:14:39.187842186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:39.187938 containerd[2643]: time="2025-05-17T00:14:39.187921426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:39.215034 systemd[1]: Started cri-containerd-44e380e574055f87b4636c8a3ade65327c3d678877e97ab545018cdfbc038284.scope - libcontainer container 44e380e574055f87b4636c8a3ade65327c3d678877e97ab545018cdfbc038284. May 17 00:14:39.226005 kubelet[4097]: I0517 00:14:39.225970 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-cni-bin-dir\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226090 kubelet[4097]: I0517 00:14:39.226026 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-flexvol-driver-host\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226090 kubelet[4097]: I0517 00:14:39.226059 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/12619c8a-9b68-439b-993f-4e7fac9d75c3-node-certs\") pod \"calico-node-dbmdq\" (UID: 
\"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226090 kubelet[4097]: I0517 00:14:39.226087 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-policysync\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226219 kubelet[4097]: I0517 00:14:39.226103 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12619c8a-9b68-439b-993f-4e7fac9d75c3-tigera-ca-bundle\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226219 kubelet[4097]: I0517 00:14:39.226119 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-lib-modules\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226219 kubelet[4097]: I0517 00:14:39.226133 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-var-lib-calico\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226219 kubelet[4097]: I0517 00:14:39.226148 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88xr\" (UniqueName: \"kubernetes.io/projected/12619c8a-9b68-439b-993f-4e7fac9d75c3-kube-api-access-b88xr\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " 
pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226219 kubelet[4097]: I0517 00:14:39.226164 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-cni-log-dir\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226359 kubelet[4097]: I0517 00:14:39.226178 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-cni-net-dir\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226359 kubelet[4097]: I0517 00:14:39.226195 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-var-run-calico\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.226359 kubelet[4097]: I0517 00:14:39.226211 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12619c8a-9b68-439b-993f-4e7fac9d75c3-xtables-lock\") pod \"calico-node-dbmdq\" (UID: \"12619c8a-9b68-439b-993f-4e7fac9d75c3\") " pod="calico-system/calico-node-dbmdq" May 17 00:14:39.237890 containerd[2643]: time="2025-05-17T00:14:39.237856549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bddb49b7b-tr7q2,Uid:096cbbc3-f9ec-40ae-8c79-9052f7d66a61,Namespace:calico-system,Attempt:0,} returns sandbox id \"44e380e574055f87b4636c8a3ade65327c3d678877e97ab545018cdfbc038284\"" May 17 00:14:39.238933 containerd[2643]: time="2025-05-17T00:14:39.238912429Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:14:39.328308 kubelet[4097]: E0517 00:14:39.328286 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.328308 kubelet[4097]: W0517 00:14:39.328306 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.328369 kubelet[4097]: E0517 00:14:39.328325 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:39.329580 kubelet[4097]: E0517 00:14:39.329563 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.329580 kubelet[4097]: W0517 00:14:39.329577 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.329625 kubelet[4097]: E0517 00:14:39.329592 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:39.335887 kubelet[4097]: E0517 00:14:39.335875 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.335929 kubelet[4097]: W0517 00:14:39.335888 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.335929 kubelet[4097]: E0517 00:14:39.335904 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:39.459716 kubelet[4097]: E0517 00:14:39.459641 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2ztl" podUID="67172858-15b2-4ceb-9630-af18b81413de" May 17 00:14:39.484398 containerd[2643]: time="2025-05-17T00:14:39.484363483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbmdq,Uid:12619c8a-9b68-439b-993f-4e7fac9d75c3,Namespace:calico-system,Attempt:0,}" May 17 00:14:39.496659 containerd[2643]: time="2025-05-17T00:14:39.496598084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:39.496659 containerd[2643]: time="2025-05-17T00:14:39.496647884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:39.496706 containerd[2643]: time="2025-05-17T00:14:39.496660204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:39.496750 containerd[2643]: time="2025-05-17T00:14:39.496734924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:39.522091 systemd[1]: Started cri-containerd-2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2.scope - libcontainer container 2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2. May 17 00:14:39.525852 kubelet[4097]: E0517 00:14:39.525832 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.525852 kubelet[4097]: W0517 00:14:39.525849 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.525916 kubelet[4097]: E0517 00:14:39.525867 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:39.526075 kubelet[4097]: E0517 00:14:39.526064 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.526114 kubelet[4097]: W0517 00:14:39.526072 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.526139 kubelet[4097]: E0517 00:14:39.526113 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:39.526319 kubelet[4097]: E0517 00:14:39.526308 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.526319 kubelet[4097]: W0517 00:14:39.526316 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.526359 kubelet[4097]: E0517 00:14:39.526324 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:39.526535 kubelet[4097]: E0517 00:14:39.526525 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.526535 kubelet[4097]: W0517 00:14:39.526532 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.526580 kubelet[4097]: E0517 00:14:39.526539 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:39.526757 kubelet[4097]: E0517 00:14:39.526746 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.526757 kubelet[4097]: W0517 00:14:39.526755 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.526799 kubelet[4097]: E0517 00:14:39.526763 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:39.526953 kubelet[4097]: E0517 00:14:39.526942 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.526953 kubelet[4097]: W0517 00:14:39.526950 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.526999 kubelet[4097]: E0517 00:14:39.526957 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:39.527169 kubelet[4097]: E0517 00:14:39.527160 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.527169 kubelet[4097]: W0517 00:14:39.527167 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.527212 kubelet[4097]: E0517 00:14:39.527174 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:39.527372 kubelet[4097]: E0517 00:14:39.527362 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.527372 kubelet[4097]: W0517 00:14:39.527369 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.527417 kubelet[4097]: E0517 00:14:39.527376 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.527586 kubelet[4097]: E0517 00:14:39.527575 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:14:39.527586 kubelet[4097]: W0517 00:14:39.527583 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:14:39.527629 kubelet[4097]: E0517 00:14:39.527592 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:14:39.530512 kubelet[4097]: E0517 00:14:39.530375 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.530723 kubelet[4097]: I0517 00:14:39.530645 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67172858-15b2-4ceb-9630-af18b81413de-kubelet-dir\") pod \"csi-node-driver-b2ztl\" (UID: \"67172858-15b2-4ceb-9630-af18b81413de\") " pod="calico-system/csi-node-driver-b2ztl"
May 17 00:14:39.530901 kubelet[4097]: E0517 00:14:39.530865 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.530901 kubelet[4097]: I0517 00:14:39.530879 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/67172858-15b2-4ceb-9630-af18b81413de-varrun\") pod \"csi-node-driver-b2ztl\" (UID: \"67172858-15b2-4ceb-9630-af18b81413de\") " pod="calico-system/csi-node-driver-b2ztl"
May 17 00:14:39.531336 kubelet[4097]: E0517 00:14:39.531307 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.531519 kubelet[4097]: I0517 00:14:39.531501 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/67172858-15b2-4ceb-9630-af18b81413de-registration-dir\") pod \"csi-node-driver-b2ztl\" (UID: \"67172858-15b2-4ceb-9630-af18b81413de\") " pod="calico-system/csi-node-driver-b2ztl"
May 17 00:14:39.531674 kubelet[4097]: E0517 00:14:39.531664 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.531713 kubelet[4097]: I0517 00:14:39.531679 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ldn\" (UniqueName: \"kubernetes.io/projected/67172858-15b2-4ceb-9630-af18b81413de-kube-api-access-b9ldn\") pod \"csi-node-driver-b2ztl\" (UID: \"67172858-15b2-4ceb-9630-af18b81413de\") " pod="calico-system/csi-node-driver-b2ztl"
May 17 00:14:39.532425 kubelet[4097]: E0517 00:14:39.532415 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.532621 kubelet[4097]: I0517 00:14:39.532589 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/67172858-15b2-4ceb-9630-af18b81413de-socket-dir\") pod \"csi-node-driver-b2ztl\" (UID: \"67172858-15b2-4ceb-9630-af18b81413de\") " pod="calico-system/csi-node-driver-b2ztl"
May 17 00:14:39.533193 kubelet[4097]: E0517 00:14:39.533177 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.537449 containerd[2643]: time="2025-05-17T00:14:39.537412046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbmdq,Uid:12619c8a-9b68-439b-993f-4e7fac9d75c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2\""
May 17 00:14:39.633248 kubelet[4097]: E0517 00:14:39.633135 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:14:39.638105 kubelet[4097]: E0517 00:14:39.638072 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:39.647374 kubelet[4097]: E0517 00:14:39.647359 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:39.647374 kubelet[4097]: W0517 00:14:39.647372 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:39.647418 kubelet[4097]: E0517 00:14:39.647383 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:39.838749 containerd[2643]: time="2025-05-17T00:14:39.838703184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:39.838749 containerd[2643]: time="2025-05-17T00:14:39.838739784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 17 00:14:39.839512 containerd[2643]: time="2025-05-17T00:14:39.839489984Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:39.841317 containerd[2643]: time="2025-05-17T00:14:39.841295184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:39.842019 containerd[2643]: time="2025-05-17T00:14:39.841984344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 603.038555ms" May 17 00:14:39.842076 containerd[2643]: time="2025-05-17T00:14:39.842019184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 17 00:14:39.842712 containerd[2643]: time="2025-05-17T00:14:39.842692384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:14:39.847303 containerd[2643]: time="2025-05-17T00:14:39.847273384Z" level=info msg="CreateContainer within sandbox \"44e380e574055f87b4636c8a3ade65327c3d678877e97ab545018cdfbc038284\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:14:39.852449 containerd[2643]: time="2025-05-17T00:14:39.852419945Z" level=info msg="CreateContainer within sandbox \"44e380e574055f87b4636c8a3ade65327c3d678877e97ab545018cdfbc038284\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d0e253220ac3352404941dc2c53af0875fb4d0368ac6e7611b2bbea1516da71\"" May 17 00:14:39.852815 containerd[2643]: time="2025-05-17T00:14:39.852790505Z" level=info msg="StartContainer for \"1d0e253220ac3352404941dc2c53af0875fb4d0368ac6e7611b2bbea1516da71\"" May 17 00:14:39.880001 systemd[1]: Started cri-containerd-1d0e253220ac3352404941dc2c53af0875fb4d0368ac6e7611b2bbea1516da71.scope - libcontainer container 1d0e253220ac3352404941dc2c53af0875fb4d0368ac6e7611b2bbea1516da71. 
May 17 00:14:39.903533 containerd[2643]: time="2025-05-17T00:14:39.903502748Z" level=info msg="StartContainer for \"1d0e253220ac3352404941dc2c53af0875fb4d0368ac6e7611b2bbea1516da71\" returns successfully" May 17 00:14:40.156156 kubelet[4097]: I0517 00:14:40.156056 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bddb49b7b-tr7q2" podStartSLOduration=1.552108727 podStartE2EDuration="2.156041362s" podCreationTimestamp="2025-05-17 00:14:38 +0000 UTC" firstStartedPulling="2025-05-17 00:14:39.238664509 +0000 UTC m=+22.199603664" lastFinishedPulling="2025-05-17 00:14:39.842597144 +0000 UTC m=+22.803536299" observedRunningTime="2025-05-17 00:14:40.155750722 +0000 UTC m=+23.116689917" watchObservedRunningTime="2025-05-17 00:14:40.156041362 +0000 UTC m=+23.116980517" May 17 00:14:40.234408 kubelet[4097]: E0517 00:14:40.234382 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.234408 kubelet[4097]: W0517 00:14:40.234401 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.234510 kubelet[4097]: E0517 00:14:40.234419 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.234621 kubelet[4097]: E0517 00:14:40.234610 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.234621 kubelet[4097]: W0517 00:14:40.234617 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.234661 kubelet[4097]: E0517 00:14:40.234625 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.234870 kubelet[4097]: E0517 00:14:40.234862 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.234890 kubelet[4097]: W0517 00:14:40.234870 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.234890 kubelet[4097]: E0517 00:14:40.234877 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.235034 kubelet[4097]: E0517 00:14:40.235024 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.235034 kubelet[4097]: W0517 00:14:40.235031 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.235074 kubelet[4097]: E0517 00:14:40.235038 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.235310 kubelet[4097]: E0517 00:14:40.235300 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.235310 kubelet[4097]: W0517 00:14:40.235308 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.235352 kubelet[4097]: E0517 00:14:40.235315 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.235522 kubelet[4097]: E0517 00:14:40.235511 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.235522 kubelet[4097]: W0517 00:14:40.235518 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.235560 kubelet[4097]: E0517 00:14:40.235534 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.235786 kubelet[4097]: E0517 00:14:40.235776 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.235786 kubelet[4097]: W0517 00:14:40.235783 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.235829 kubelet[4097]: E0517 00:14:40.235790 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.236025 kubelet[4097]: E0517 00:14:40.236016 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.236025 kubelet[4097]: W0517 00:14:40.236023 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.236069 kubelet[4097]: E0517 00:14:40.236030 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.236205 kubelet[4097]: E0517 00:14:40.236197 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.236228 kubelet[4097]: W0517 00:14:40.236205 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.236228 kubelet[4097]: E0517 00:14:40.236213 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.236356 kubelet[4097]: E0517 00:14:40.236348 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.236376 kubelet[4097]: W0517 00:14:40.236356 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.236376 kubelet[4097]: E0517 00:14:40.236363 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.236562 kubelet[4097]: E0517 00:14:40.236554 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.236584 kubelet[4097]: W0517 00:14:40.236561 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.236584 kubelet[4097]: E0517 00:14:40.236568 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.236769 kubelet[4097]: E0517 00:14:40.236761 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.236789 kubelet[4097]: W0517 00:14:40.236768 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.236789 kubelet[4097]: E0517 00:14:40.236774 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.236972 kubelet[4097]: E0517 00:14:40.236961 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.236972 kubelet[4097]: W0517 00:14:40.236969 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.237025 kubelet[4097]: E0517 00:14:40.236976 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.237120 kubelet[4097]: E0517 00:14:40.237110 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.237120 kubelet[4097]: W0517 00:14:40.237117 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.237162 kubelet[4097]: E0517 00:14:40.237123 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.237263 kubelet[4097]: E0517 00:14:40.237254 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.237263 kubelet[4097]: W0517 00:14:40.237261 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.237305 kubelet[4097]: E0517 00:14:40.237267 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.238483 kubelet[4097]: E0517 00:14:40.238467 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.238505 kubelet[4097]: W0517 00:14:40.238482 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.238505 kubelet[4097]: E0517 00:14:40.238495 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.238760 kubelet[4097]: E0517 00:14:40.238749 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.238760 kubelet[4097]: W0517 00:14:40.238757 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.238802 kubelet[4097]: E0517 00:14:40.238769 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.239022 kubelet[4097]: E0517 00:14:40.239010 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.239022 kubelet[4097]: W0517 00:14:40.239019 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.239066 kubelet[4097]: E0517 00:14:40.239030 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.239308 kubelet[4097]: E0517 00:14:40.239291 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.239308 kubelet[4097]: W0517 00:14:40.239306 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.239354 kubelet[4097]: E0517 00:14:40.239320 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.239592 kubelet[4097]: E0517 00:14:40.239583 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.239613 kubelet[4097]: W0517 00:14:40.239591 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.239613 kubelet[4097]: E0517 00:14:40.239602 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.239863 kubelet[4097]: E0517 00:14:40.239855 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.239884 kubelet[4097]: W0517 00:14:40.239862 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.239884 kubelet[4097]: E0517 00:14:40.239873 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.240045 kubelet[4097]: E0517 00:14:40.240037 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.240066 kubelet[4097]: W0517 00:14:40.240045 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.240066 kubelet[4097]: E0517 00:14:40.240063 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.240196 kubelet[4097]: E0517 00:14:40.240188 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.240196 kubelet[4097]: W0517 00:14:40.240195 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.240240 kubelet[4097]: E0517 00:14:40.240219 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.240342 kubelet[4097]: E0517 00:14:40.240334 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.240363 kubelet[4097]: W0517 00:14:40.240342 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.240363 kubelet[4097]: E0517 00:14:40.240352 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.240561 kubelet[4097]: E0517 00:14:40.240554 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.240582 kubelet[4097]: W0517 00:14:40.240561 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.240582 kubelet[4097]: E0517 00:14:40.240571 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.240779 kubelet[4097]: E0517 00:14:40.240771 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.240802 kubelet[4097]: W0517 00:14:40.240778 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.240802 kubelet[4097]: E0517 00:14:40.240789 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.241053 kubelet[4097]: E0517 00:14:40.241038 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.241075 kubelet[4097]: W0517 00:14:40.241056 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.241100 kubelet[4097]: E0517 00:14:40.241074 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.241346 kubelet[4097]: E0517 00:14:40.241334 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.241366 kubelet[4097]: W0517 00:14:40.241346 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.241366 kubelet[4097]: E0517 00:14:40.241359 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.241576 kubelet[4097]: E0517 00:14:40.241567 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.241596 kubelet[4097]: W0517 00:14:40.241576 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.241596 kubelet[4097]: E0517 00:14:40.241587 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.241839 kubelet[4097]: E0517 00:14:40.241829 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.241859 kubelet[4097]: W0517 00:14:40.241839 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.241859 kubelet[4097]: E0517 00:14:40.241851 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.242165 kubelet[4097]: E0517 00:14:40.242153 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.242186 kubelet[4097]: W0517 00:14:40.242166 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.242186 kubelet[4097]: E0517 00:14:40.242181 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.242539 kubelet[4097]: E0517 00:14:40.242526 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.242559 kubelet[4097]: W0517 00:14:40.242539 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.242559 kubelet[4097]: E0517 00:14:40.242554 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:40.242747 kubelet[4097]: E0517 00:14:40.242739 4097 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:40.242767 kubelet[4097]: W0517 00:14:40.242747 4097 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:40.242767 kubelet[4097]: E0517 00:14:40.242757 4097 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:40.250477 containerd[2643]: time="2025-05-17T00:14:40.250443767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:40.250742 containerd[2643]: time="2025-05-17T00:14:40.250514887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 17 00:14:40.251285 containerd[2643]: time="2025-05-17T00:14:40.251266727Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:40.253036 containerd[2643]: time="2025-05-17T00:14:40.253014287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:40.253728 containerd[2643]: time="2025-05-17T00:14:40.253702847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 410.980983ms" May 17 00:14:40.253762 containerd[2643]: time="2025-05-17T00:14:40.253734527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 00:14:40.255203 containerd[2643]: time="2025-05-17T00:14:40.255183087Z" level=info msg="CreateContainer within sandbox \"2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:14:40.262376 containerd[2643]: time="2025-05-17T00:14:40.262341408Z" level=info msg="CreateContainer within sandbox \"2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c\"" May 17 00:14:40.262743 containerd[2643]: time="2025-05-17T00:14:40.262723808Z" level=info msg="StartContainer for \"643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c\"" May 17 00:14:40.305075 systemd[1]: Started cri-containerd-643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c.scope - libcontainer container 643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c. May 17 00:14:40.323796 containerd[2643]: time="2025-05-17T00:14:40.323760731Z" level=info msg="StartContainer for \"643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c\" returns successfully" May 17 00:14:40.335482 systemd[1]: cri-containerd-643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c.scope: Deactivated successfully. May 17 00:14:40.421420 containerd[2643]: time="2025-05-17T00:14:40.421317297Z" level=info msg="shim disconnected" id=643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c namespace=k8s.io May 17 00:14:40.421420 containerd[2643]: time="2025-05-17T00:14:40.421364897Z" level=warning msg="cleaning up after shim disconnected" id=643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c namespace=k8s.io May 17 00:14:40.421420 containerd[2643]: time="2025-05-17T00:14:40.421372617Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:14:41.028426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-643a02d5c19c9eda5174fb92f1f7e1b21f580137458e3698819f990136cd613c-rootfs.mount: Deactivated successfully. 
May 17 00:14:41.115258 kubelet[4097]: E0517 00:14:41.115213 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2ztl" podUID="67172858-15b2-4ceb-9630-af18b81413de" May 17 00:14:41.152059 containerd[2643]: time="2025-05-17T00:14:41.152024216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:14:42.129739 containerd[2643]: time="2025-05-17T00:14:42.129673586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:42.130112 containerd[2643]: time="2025-05-17T00:14:42.129703466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 17 00:14:42.130696 containerd[2643]: time="2025-05-17T00:14:42.130382306Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:42.132307 containerd[2643]: time="2025-05-17T00:14:42.132258506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:42.133002 containerd[2643]: time="2025-05-17T00:14:42.132974746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 980.91697ms" May 17 00:14:42.133065 containerd[2643]: time="2025-05-17T00:14:42.133005826Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 00:14:42.134671 containerd[2643]: time="2025-05-17T00:14:42.134650346Z" level=info msg="CreateContainer within sandbox \"2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:14:42.141588 containerd[2643]: time="2025-05-17T00:14:42.141559867Z" level=info msg="CreateContainer within sandbox \"2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b\"" May 17 00:14:42.141909 containerd[2643]: time="2025-05-17T00:14:42.141874227Z" level=info msg="StartContainer for \"8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b\"" May 17 00:14:42.184015 systemd[1]: Started cri-containerd-8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b.scope - libcontainer container 8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b. May 17 00:14:42.202673 containerd[2643]: time="2025-05-17T00:14:42.202637670Z" level=info msg="StartContainer for \"8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b\" returns successfully" May 17 00:14:42.600040 containerd[2643]: time="2025-05-17T00:14:42.600000289Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:14:42.601856 systemd[1]: cri-containerd-8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b.scope: Deactivated successfully. 
May 17 00:14:42.685719 kubelet[4097]: I0517 00:14:42.685697 4097 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:14:42.703846 systemd[1]: Created slice kubepods-besteffort-podafa60deb_188d_4048_ba25_d7cae1d87a15.slice - libcontainer container kubepods-besteffort-podafa60deb_188d_4048_ba25_d7cae1d87a15.slice. May 17 00:14:42.707526 systemd[1]: Created slice kubepods-besteffort-pod10344652_ca68_479a_86f6_162b29976180.slice - libcontainer container kubepods-besteffort-pod10344652_ca68_479a_86f6_162b29976180.slice. May 17 00:14:42.711096 systemd[1]: Created slice kubepods-besteffort-pod36199e3b_e706_4256_a6af_e8f5ec5ff5b0.slice - libcontainer container kubepods-besteffort-pod36199e3b_e706_4256_a6af_e8f5ec5ff5b0.slice. May 17 00:14:42.714793 systemd[1]: Created slice kubepods-burstable-pod2e72750b_2053_4375_95c0_ca47f0bf61d4.slice - libcontainer container kubepods-burstable-pod2e72750b_2053_4375_95c0_ca47f0bf61d4.slice. May 17 00:14:42.718982 systemd[1]: Created slice kubepods-besteffort-pod701b4956_7c9e_46f9_9aea_31b0e2bd2c8b.slice - libcontainer container kubepods-besteffort-pod701b4956_7c9e_46f9_9aea_31b0e2bd2c8b.slice. May 17 00:14:42.722768 systemd[1]: Created slice kubepods-besteffort-poda41cd5df_5d9c_4907_bb35_9d4adffa8017.slice - libcontainer container kubepods-besteffort-poda41cd5df_5d9c_4907_bb35_9d4adffa8017.slice. May 17 00:14:42.726350 systemd[1]: Created slice kubepods-burstable-podc69e66b4_623c_48d0_af99_ebae9f6f8a0f.slice - libcontainer container kubepods-burstable-podc69e66b4_623c_48d0_af99_ebae9f6f8a0f.slice. 
May 17 00:14:42.736018 containerd[2643]: time="2025-05-17T00:14:42.735948735Z" level=info msg="shim disconnected" id=8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b namespace=k8s.io May 17 00:14:42.736018 containerd[2643]: time="2025-05-17T00:14:42.736013095Z" level=warning msg="cleaning up after shim disconnected" id=8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b namespace=k8s.io May 17 00:14:42.736123 containerd[2643]: time="2025-05-17T00:14:42.736027575Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:14:42.753240 kubelet[4097]: I0517 00:14:42.753202 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-backend-key-pair\") pod \"whisker-76b45cfbd-l29zd\" (UID: \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\") " pod="calico-system/whisker-76b45cfbd-l29zd" May 17 00:14:42.753325 kubelet[4097]: I0517 00:14:42.753247 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a41cd5df-5d9c-4907-bb35-9d4adffa8017-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-b5fcf\" (UID: \"a41cd5df-5d9c-4907-bb35-9d4adffa8017\") " pod="calico-system/goldmane-78d55f7ddc-b5fcf" May 17 00:14:42.753325 kubelet[4097]: I0517 00:14:42.753265 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10344652-ca68-479a-86f6-162b29976180-calico-apiserver-certs\") pod \"calico-apiserver-847d49c9d7-2j2bt\" (UID: \"10344652-ca68-479a-86f6-162b29976180\") " pod="calico-apiserver/calico-apiserver-847d49c9d7-2j2bt" May 17 00:14:42.753325 kubelet[4097]: I0517 00:14:42.753282 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7jst\" 
(UniqueName: \"kubernetes.io/projected/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-kube-api-access-n7jst\") pod \"whisker-76b45cfbd-l29zd\" (UID: \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\") " pod="calico-system/whisker-76b45cfbd-l29zd" May 17 00:14:42.753457 kubelet[4097]: I0517 00:14:42.753401 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c69e66b4-623c-48d0-af99-ebae9f6f8a0f-config-volume\") pod \"coredns-668d6bf9bc-t2mhc\" (UID: \"c69e66b4-623c-48d0-af99-ebae9f6f8a0f\") " pod="kube-system/coredns-668d6bf9bc-t2mhc" May 17 00:14:42.753486 kubelet[4097]: I0517 00:14:42.753454 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7xn\" (UniqueName: \"kubernetes.io/projected/2e72750b-2053-4375-95c0-ca47f0bf61d4-kube-api-access-jg7xn\") pod \"coredns-668d6bf9bc-qstm5\" (UID: \"2e72750b-2053-4375-95c0-ca47f0bf61d4\") " pod="kube-system/coredns-668d6bf9bc-qstm5" May 17 00:14:42.753506 kubelet[4097]: I0517 00:14:42.753489 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhhq\" (UniqueName: \"kubernetes.io/projected/c69e66b4-623c-48d0-af99-ebae9f6f8a0f-kube-api-access-xzhhq\") pod \"coredns-668d6bf9bc-t2mhc\" (UID: \"c69e66b4-623c-48d0-af99-ebae9f6f8a0f\") " pod="kube-system/coredns-668d6bf9bc-t2mhc" May 17 00:14:42.753566 kubelet[4097]: I0517 00:14:42.753505 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e72750b-2053-4375-95c0-ca47f0bf61d4-config-volume\") pod \"coredns-668d6bf9bc-qstm5\" (UID: \"2e72750b-2053-4375-95c0-ca47f0bf61d4\") " pod="kube-system/coredns-668d6bf9bc-qstm5" May 17 00:14:42.753566 kubelet[4097]: I0517 00:14:42.753522 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41cd5df-5d9c-4907-bb35-9d4adffa8017-config\") pod \"goldmane-78d55f7ddc-b5fcf\" (UID: \"a41cd5df-5d9c-4907-bb35-9d4adffa8017\") " pod="calico-system/goldmane-78d55f7ddc-b5fcf" May 17 00:14:42.753566 kubelet[4097]: I0517 00:14:42.753538 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36199e3b-e706-4256-a6af-e8f5ec5ff5b0-calico-apiserver-certs\") pod \"calico-apiserver-847d49c9d7-8bsxh\" (UID: \"36199e3b-e706-4256-a6af-e8f5ec5ff5b0\") " pod="calico-apiserver/calico-apiserver-847d49c9d7-8bsxh" May 17 00:14:42.753566 kubelet[4097]: I0517 00:14:42.753556 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5msg\" (UniqueName: \"kubernetes.io/projected/10344652-ca68-479a-86f6-162b29976180-kube-api-access-v5msg\") pod \"calico-apiserver-847d49c9d7-2j2bt\" (UID: \"10344652-ca68-479a-86f6-162b29976180\") " pod="calico-apiserver/calico-apiserver-847d49c9d7-2j2bt" May 17 00:14:42.753649 kubelet[4097]: I0517 00:14:42.753571 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-ca-bundle\") pod \"whisker-76b45cfbd-l29zd\" (UID: \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\") " pod="calico-system/whisker-76b45cfbd-l29zd" May 17 00:14:42.753649 kubelet[4097]: I0517 00:14:42.753589 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a41cd5df-5d9c-4907-bb35-9d4adffa8017-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-b5fcf\" (UID: \"a41cd5df-5d9c-4907-bb35-9d4adffa8017\") " pod="calico-system/goldmane-78d55f7ddc-b5fcf" May 17 00:14:42.753649 kubelet[4097]: I0517 00:14:42.753607 4097 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afa60deb-188d-4048-ba25-d7cae1d87a15-tigera-ca-bundle\") pod \"calico-kube-controllers-7c6666cd8d-5cpvh\" (UID: \"afa60deb-188d-4048-ba25-d7cae1d87a15\") " pod="calico-system/calico-kube-controllers-7c6666cd8d-5cpvh" May 17 00:14:42.753649 kubelet[4097]: I0517 00:14:42.753624 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpvs\" (UniqueName: \"kubernetes.io/projected/afa60deb-188d-4048-ba25-d7cae1d87a15-kube-api-access-dlpvs\") pod \"calico-kube-controllers-7c6666cd8d-5cpvh\" (UID: \"afa60deb-188d-4048-ba25-d7cae1d87a15\") " pod="calico-system/calico-kube-controllers-7c6666cd8d-5cpvh" May 17 00:14:42.753649 kubelet[4097]: I0517 00:14:42.753643 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brv4\" (UniqueName: \"kubernetes.io/projected/a41cd5df-5d9c-4907-bb35-9d4adffa8017-kube-api-access-4brv4\") pod \"goldmane-78d55f7ddc-b5fcf\" (UID: \"a41cd5df-5d9c-4907-bb35-9d4adffa8017\") " pod="calico-system/goldmane-78d55f7ddc-b5fcf" May 17 00:14:42.753754 kubelet[4097]: I0517 00:14:42.753659 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2h6\" (UniqueName: \"kubernetes.io/projected/36199e3b-e706-4256-a6af-e8f5ec5ff5b0-kube-api-access-jt2h6\") pod \"calico-apiserver-847d49c9d7-8bsxh\" (UID: \"36199e3b-e706-4256-a6af-e8f5ec5ff5b0\") " pod="calico-apiserver/calico-apiserver-847d49c9d7-8bsxh" May 17 00:14:43.006572 containerd[2643]: time="2025-05-17T00:14:43.006458868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6666cd8d-5cpvh,Uid:afa60deb-188d-4048-ba25-d7cae1d87a15,Namespace:calico-system,Attempt:0,}" May 17 00:14:43.009895 containerd[2643]: 
time="2025-05-17T00:14:43.009865829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-2j2bt,Uid:10344652-ca68-479a-86f6-162b29976180,Namespace:calico-apiserver,Attempt:0,}" May 17 00:14:43.013447 containerd[2643]: time="2025-05-17T00:14:43.013418389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-8bsxh,Uid:36199e3b-e706-4256-a6af-e8f5ec5ff5b0,Namespace:calico-apiserver,Attempt:0,}" May 17 00:14:43.017092 containerd[2643]: time="2025-05-17T00:14:43.017068349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qstm5,Uid:2e72750b-2053-4375-95c0-ca47f0bf61d4,Namespace:kube-system,Attempt:0,}" May 17 00:14:43.021690 containerd[2643]: time="2025-05-17T00:14:43.021659989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b45cfbd-l29zd,Uid:701b4956-7c9e-46f9-9aea-31b0e2bd2c8b,Namespace:calico-system,Attempt:0,}" May 17 00:14:43.025150 containerd[2643]: time="2025-05-17T00:14:43.025125989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-b5fcf,Uid:a41cd5df-5d9c-4907-bb35-9d4adffa8017,Namespace:calico-system,Attempt:0,}" May 17 00:14:43.028701 containerd[2643]: time="2025-05-17T00:14:43.028669389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2mhc,Uid:c69e66b4-623c-48d0-af99-ebae9f6f8a0f,Namespace:kube-system,Attempt:0,}" May 17 00:14:43.063467 containerd[2643]: time="2025-05-17T00:14:43.063415351Z" level=error msg="Failed to destroy network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.064307 containerd[2643]: time="2025-05-17T00:14:43.064280431Z" level=error msg="encountered an error cleaning up failed sandbox 
\"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.064349 containerd[2643]: time="2025-05-17T00:14:43.064332311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6666cd8d-5cpvh,Uid:afa60deb-188d-4048-ba25-d7cae1d87a15,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.064565 kubelet[4097]: E0517 00:14:43.064522 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.064614 kubelet[4097]: E0517 00:14:43.064600 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6666cd8d-5cpvh" May 17 00:14:43.064643 kubelet[4097]: E0517 00:14:43.064620 4097 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c6666cd8d-5cpvh" May 17 00:14:43.064692 kubelet[4097]: E0517 00:14:43.064671 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c6666cd8d-5cpvh_calico-system(afa60deb-188d-4048-ba25-d7cae1d87a15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c6666cd8d-5cpvh_calico-system(afa60deb-188d-4048-ba25-d7cae1d87a15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c6666cd8d-5cpvh" podUID="afa60deb-188d-4048-ba25-d7cae1d87a15" May 17 00:14:43.065628 containerd[2643]: time="2025-05-17T00:14:43.065599791Z" level=error msg="Failed to destroy network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.065685 containerd[2643]: time="2025-05-17T00:14:43.065645271Z" level=error msg="Failed to destroy network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.065941 
containerd[2643]: time="2025-05-17T00:14:43.065920391Z" level=error msg="encountered an error cleaning up failed sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.065986 containerd[2643]: time="2025-05-17T00:14:43.065969431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-2j2bt,Uid:10344652-ca68-479a-86f6-162b29976180,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066046 containerd[2643]: time="2025-05-17T00:14:43.066021191Z" level=error msg="encountered an error cleaning up failed sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066432 kubelet[4097]: E0517 00:14:43.066078 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066432 kubelet[4097]: E0517 00:14:43.066120 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847d49c9d7-2j2bt" May 17 00:14:43.066432 kubelet[4097]: E0517 00:14:43.066136 4097 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847d49c9d7-2j2bt" May 17 00:14:43.066529 containerd[2643]: time="2025-05-17T00:14:43.066072031Z" level=error msg="Failed to destroy network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066529 containerd[2643]: time="2025-05-17T00:14:43.066111391Z" level=error msg="Failed to destroy network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066529 containerd[2643]: time="2025-05-17T00:14:43.066085591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qstm5,Uid:2e72750b-2053-4375-95c0-ca47f0bf61d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066597 kubelet[4097]: E0517 00:14:43.066170 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847d49c9d7-2j2bt_calico-apiserver(10344652-ca68-479a-86f6-162b29976180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847d49c9d7-2j2bt_calico-apiserver(10344652-ca68-479a-86f6-162b29976180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847d49c9d7-2j2bt" podUID="10344652-ca68-479a-86f6-162b29976180" May 17 00:14:43.066597 kubelet[4097]: E0517 00:14:43.066292 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066597 kubelet[4097]: E0517 00:14:43.066342 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-qstm5" May 17 00:14:43.066682 kubelet[4097]: E0517 00:14:43.066358 4097 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qstm5" May 17 00:14:43.066682 kubelet[4097]: E0517 00:14:43.066391 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qstm5_kube-system(2e72750b-2053-4375-95c0-ca47f0bf61d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qstm5_kube-system(2e72750b-2053-4375-95c0-ca47f0bf61d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qstm5" podUID="2e72750b-2053-4375-95c0-ca47f0bf61d4" May 17 00:14:43.066734 containerd[2643]: time="2025-05-17T00:14:43.066683991Z" level=error msg="encountered an error cleaning up failed sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066758 containerd[2643]: time="2025-05-17T00:14:43.066734271Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-8bsxh,Uid:36199e3b-e706-4256-a6af-e8f5ec5ff5b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066808 containerd[2643]: time="2025-05-17T00:14:43.066701231Z" level=error msg="encountered an error cleaning up failed sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.066853 containerd[2643]: time="2025-05-17T00:14:43.066825991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b45cfbd-l29zd,Uid:701b4956-7c9e-46f9-9aea-31b0e2bd2c8b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.067124 kubelet[4097]: E0517 00:14:43.067102 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.067168 kubelet[4097]: E0517 00:14:43.067136 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76b45cfbd-l29zd" May 17 00:14:43.067168 kubelet[4097]: E0517 00:14:43.067152 4097 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76b45cfbd-l29zd" May 17 00:14:43.067215 kubelet[4097]: E0517 00:14:43.067104 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.067215 kubelet[4097]: E0517 00:14:43.067177 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76b45cfbd-l29zd_calico-system(701b4956-7c9e-46f9-9aea-31b0e2bd2c8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76b45cfbd-l29zd_calico-system(701b4956-7c9e-46f9-9aea-31b0e2bd2c8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-76b45cfbd-l29zd" podUID="701b4956-7c9e-46f9-9aea-31b0e2bd2c8b" May 17 00:14:43.067277 kubelet[4097]: E0517 00:14:43.067212 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847d49c9d7-8bsxh" May 17 00:14:43.067277 kubelet[4097]: E0517 00:14:43.067233 4097 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847d49c9d7-8bsxh" May 17 00:14:43.067277 kubelet[4097]: E0517 00:14:43.067265 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847d49c9d7-8bsxh_calico-apiserver(36199e3b-e706-4256-a6af-e8f5ec5ff5b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847d49c9d7-8bsxh_calico-apiserver(36199e3b-e706-4256-a6af-e8f5ec5ff5b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847d49c9d7-8bsxh" podUID="36199e3b-e706-4256-a6af-e8f5ec5ff5b0" May 17 00:14:43.069111 
containerd[2643]: time="2025-05-17T00:14:43.069084351Z" level=error msg="Failed to destroy network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.069428 containerd[2643]: time="2025-05-17T00:14:43.069406711Z" level=error msg="encountered an error cleaning up failed sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.069471 containerd[2643]: time="2025-05-17T00:14:43.069453871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-b5fcf,Uid:a41cd5df-5d9c-4907-bb35-9d4adffa8017,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.069582 kubelet[4097]: E0517 00:14:43.069563 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.069615 kubelet[4097]: E0517 00:14:43.069595 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-b5fcf" May 17 00:14:43.069637 kubelet[4097]: E0517 00:14:43.069611 4097 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-b5fcf" May 17 00:14:43.069660 kubelet[4097]: E0517 00:14:43.069641 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:14:43.073293 containerd[2643]: time="2025-05-17T00:14:43.073264071Z" level=error msg="Failed to destroy network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 
00:14:43.073598 containerd[2643]: time="2025-05-17T00:14:43.073573672Z" level=error msg="encountered an error cleaning up failed sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.073635 containerd[2643]: time="2025-05-17T00:14:43.073614192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2mhc,Uid:c69e66b4-623c-48d0-af99-ebae9f6f8a0f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.073742 kubelet[4097]: E0517 00:14:43.073720 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.073772 kubelet[4097]: E0517 00:14:43.073759 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t2mhc" May 17 00:14:43.073796 kubelet[4097]: E0517 00:14:43.073777 4097 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t2mhc" May 17 00:14:43.073825 kubelet[4097]: E0517 00:14:43.073808 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t2mhc_kube-system(c69e66b4-623c-48d0-af99-ebae9f6f8a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t2mhc_kube-system(c69e66b4-623c-48d0-af99-ebae9f6f8a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t2mhc" podUID="c69e66b4-623c-48d0-af99-ebae9f6f8a0f" May 17 00:14:43.115732 systemd[1]: Created slice kubepods-besteffort-pod67172858_15b2_4ceb_9630_af18b81413de.slice - libcontainer container kubepods-besteffort-pod67172858_15b2_4ceb_9630_af18b81413de.slice. May 17 00:14:43.117433 containerd[2643]: time="2025-05-17T00:14:43.117400553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2ztl,Uid:67172858-15b2-4ceb-9630-af18b81413de,Namespace:calico-system,Attempt:0,}" May 17 00:14:43.153377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8abcc446c4d00323183229cc8855878ffe4066ab1683e3e1067d597a0f4bff4b-rootfs.mount: Deactivated successfully. 
May 17 00:14:43.155194 kubelet[4097]: I0517 00:14:43.155163 4097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:14:43.155651 containerd[2643]: time="2025-05-17T00:14:43.155624835Z" level=info msg="StopPodSandbox for \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\"" May 17 00:14:43.155858 containerd[2643]: time="2025-05-17T00:14:43.155788315Z" level=info msg="Ensure that sandbox a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73 in task-service has been cleanup successfully" May 17 00:14:43.157181 kubelet[4097]: I0517 00:14:43.157166 4097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:14:43.157233 containerd[2643]: time="2025-05-17T00:14:43.157212715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:14:43.157548 containerd[2643]: time="2025-05-17T00:14:43.157530235Z" level=info msg="StopPodSandbox for \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\"" May 17 00:14:43.157693 containerd[2643]: time="2025-05-17T00:14:43.157680235Z" level=info msg="Ensure that sandbox e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b in task-service has been cleanup successfully" May 17 00:14:43.157908 kubelet[4097]: I0517 00:14:43.157883 4097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:14:43.158269 containerd[2643]: time="2025-05-17T00:14:43.158249395Z" level=info msg="StopPodSandbox for \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\"" May 17 00:14:43.158401 containerd[2643]: time="2025-05-17T00:14:43.158387835Z" level=info msg="Ensure that sandbox 864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409 in task-service has 
been cleanup successfully" May 17 00:14:43.158618 kubelet[4097]: I0517 00:14:43.158609 4097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:14:43.159028 containerd[2643]: time="2025-05-17T00:14:43.159011275Z" level=info msg="StopPodSandbox for \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\"" May 17 00:14:43.159158 containerd[2643]: time="2025-05-17T00:14:43.159144435Z" level=info msg="Ensure that sandbox 1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1 in task-service has been cleanup successfully" May 17 00:14:43.159273 containerd[2643]: time="2025-05-17T00:14:43.159250275Z" level=error msg="Failed to destroy network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.159363 kubelet[4097]: I0517 00:14:43.159349 4097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:14:43.159592 containerd[2643]: time="2025-05-17T00:14:43.159571515Z" level=error msg="encountered an error cleaning up failed sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.159635 containerd[2643]: time="2025-05-17T00:14:43.159616315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2ztl,Uid:67172858-15b2-4ceb-9630-af18b81413de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.159723 containerd[2643]: time="2025-05-17T00:14:43.159702555Z" level=info msg="StopPodSandbox for \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\"" May 17 00:14:43.159767 kubelet[4097]: E0517 00:14:43.159747 4097 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.159801 kubelet[4097]: E0517 00:14:43.159788 4097 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2ztl" May 17 00:14:43.159826 kubelet[4097]: E0517 00:14:43.159811 4097 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2ztl" May 17 00:14:43.159862 containerd[2643]: time="2025-05-17T00:14:43.159848955Z" level=info msg="Ensure that sandbox 
cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6 in task-service has been cleanup successfully" May 17 00:14:43.159885 kubelet[4097]: E0517 00:14:43.159843 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b2ztl_calico-system(67172858-15b2-4ceb-9630-af18b81413de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b2ztl_calico-system(67172858-15b2-4ceb-9630-af18b81413de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2ztl" podUID="67172858-15b2-4ceb-9630-af18b81413de" May 17 00:14:43.160279 kubelet[4097]: I0517 00:14:43.160267 4097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:14:43.160619 containerd[2643]: time="2025-05-17T00:14:43.160597675Z" level=info msg="StopPodSandbox for \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\"" May 17 00:14:43.160761 containerd[2643]: time="2025-05-17T00:14:43.160746115Z" level=info msg="Ensure that sandbox 64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a in task-service has been cleanup successfully" May 17 00:14:43.161008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100-shm.mount: Deactivated successfully. 
May 17 00:14:43.161086 kubelet[4097]: I0517 00:14:43.161070 4097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:14:43.161527 containerd[2643]: time="2025-05-17T00:14:43.161500995Z" level=info msg="StopPodSandbox for \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\"" May 17 00:14:43.161668 containerd[2643]: time="2025-05-17T00:14:43.161653075Z" level=info msg="Ensure that sandbox b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76 in task-service has been cleanup successfully" May 17 00:14:43.177179 containerd[2643]: time="2025-05-17T00:14:43.177132996Z" level=error msg="StopPodSandbox for \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\" failed" error="failed to destroy network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.177358 kubelet[4097]: E0517 00:14:43.177323 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:14:43.177426 kubelet[4097]: E0517 00:14:43.177387 4097 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73"} May 17 00:14:43.177473 kubelet[4097]: E0517 00:14:43.177443 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"2e72750b-2053-4375-95c0-ca47f0bf61d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:43.177552 kubelet[4097]: E0517 00:14:43.177473 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e72750b-2053-4375-95c0-ca47f0bf61d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qstm5" podUID="2e72750b-2053-4375-95c0-ca47f0bf61d4" May 17 00:14:43.178950 containerd[2643]: time="2025-05-17T00:14:43.178917036Z" level=error msg="StopPodSandbox for \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\" failed" error="failed to destroy network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.179083 kubelet[4097]: E0517 00:14:43.179065 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:14:43.179132 kubelet[4097]: E0517 00:14:43.179088 4097 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409"} May 17 00:14:43.179132 kubelet[4097]: E0517 00:14:43.179109 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36199e3b-e706-4256-a6af-e8f5ec5ff5b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:43.179132 kubelet[4097]: E0517 00:14:43.179124 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36199e3b-e706-4256-a6af-e8f5ec5ff5b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847d49c9d7-8bsxh" podUID="36199e3b-e706-4256-a6af-e8f5ec5ff5b0" May 17 00:14:43.179317 containerd[2643]: time="2025-05-17T00:14:43.179289116Z" level=error msg="StopPodSandbox for \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\" failed" error="failed to destroy network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" May 17 00:14:43.179421 kubelet[4097]: E0517 00:14:43.179392 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:14:43.179463 kubelet[4097]: E0517 00:14:43.179431 4097 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b"} May 17 00:14:43.179487 kubelet[4097]: E0517 00:14:43.179463 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a41cd5df-5d9c-4907-bb35-9d4adffa8017\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:43.179521 kubelet[4097]: E0517 00:14:43.179493 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a41cd5df-5d9c-4907-bb35-9d4adffa8017\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 
17 00:14:43.180449 containerd[2643]: time="2025-05-17T00:14:43.180420196Z" level=error msg="StopPodSandbox for \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\" failed" error="failed to destroy network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.180551 containerd[2643]: time="2025-05-17T00:14:43.180518276Z" level=error msg="StopPodSandbox for \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\" failed" error="failed to destroy network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.180588 kubelet[4097]: E0517 00:14:43.180538 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:14:43.180588 kubelet[4097]: E0517 00:14:43.180566 4097 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6"} May 17 00:14:43.180633 kubelet[4097]: E0517 00:14:43.180587 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"afa60deb-188d-4048-ba25-d7cae1d87a15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:43.180633 kubelet[4097]: E0517 00:14:43.180602 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"afa60deb-188d-4048-ba25-d7cae1d87a15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c6666cd8d-5cpvh" podUID="afa60deb-188d-4048-ba25-d7cae1d87a15" May 17 00:14:43.180705 kubelet[4097]: E0517 00:14:43.180630 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:14:43.180705 kubelet[4097]: E0517 00:14:43.180644 4097 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1"} May 17 00:14:43.180705 kubelet[4097]: E0517 00:14:43.180662 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c69e66b4-623c-48d0-af99-ebae9f6f8a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:43.180705 kubelet[4097]: E0517 00:14:43.180675 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c69e66b4-623c-48d0-af99-ebae9f6f8a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t2mhc" podUID="c69e66b4-623c-48d0-af99-ebae9f6f8a0f" May 17 00:14:43.181626 containerd[2643]: time="2025-05-17T00:14:43.181584476Z" level=error msg="StopPodSandbox for \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\" failed" error="failed to destroy network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.181757 kubelet[4097]: E0517 00:14:43.181740 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:14:43.181780 kubelet[4097]: E0517 00:14:43.181760 4097 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a"} May 17 00:14:43.181800 kubelet[4097]: E0517 00:14:43.181780 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:43.181833 kubelet[4097]: E0517 00:14:43.181796 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76b45cfbd-l29zd" podUID="701b4956-7c9e-46f9-9aea-31b0e2bd2c8b" May 17 00:14:43.182676 containerd[2643]: time="2025-05-17T00:14:43.182648876Z" level=error msg="StopPodSandbox for \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\" failed" error="failed to destroy network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:43.182786 kubelet[4097]: E0517 00:14:43.182761 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:14:43.182809 kubelet[4097]: E0517 00:14:43.182797 4097 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76"} May 17 00:14:43.182830 kubelet[4097]: E0517 00:14:43.182822 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10344652-ca68-479a-86f6-162b29976180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:43.182860 kubelet[4097]: E0517 00:14:43.182840 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10344652-ca68-479a-86f6-162b29976180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847d49c9d7-2j2bt" podUID="10344652-ca68-479a-86f6-162b29976180" May 17 00:14:44.162915 kubelet[4097]: I0517 00:14:44.162857 4097 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:14:44.163354 containerd[2643]: time="2025-05-17T00:14:44.163324160Z" level=info msg="StopPodSandbox for \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\"" May 17 00:14:44.163511 containerd[2643]: time="2025-05-17T00:14:44.163495840Z" level=info msg="Ensure that sandbox 7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100 in task-service has been cleanup successfully" May 17 00:14:44.183485 containerd[2643]: time="2025-05-17T00:14:44.183446321Z" level=error msg="StopPodSandbox for \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\" failed" error="failed to destroy network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:44.183650 kubelet[4097]: E0517 00:14:44.183612 4097 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:14:44.183684 kubelet[4097]: E0517 00:14:44.183656 4097 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100"} May 17 00:14:44.183711 kubelet[4097]: E0517 00:14:44.183689 4097 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67172858-15b2-4ceb-9630-af18b81413de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:44.183760 kubelet[4097]: E0517 00:14:44.183710 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67172858-15b2-4ceb-9630-af18b81413de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2ztl" podUID="67172858-15b2-4ceb-9630-af18b81413de" May 17 00:14:44.949296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641507804.mount: Deactivated successfully. 
May 17 00:14:44.966375 containerd[2643]: time="2025-05-17T00:14:44.966328234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:44.966480 containerd[2643]: time="2025-05-17T00:14:44.966330474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 17 00:14:44.967083 containerd[2643]: time="2025-05-17T00:14:44.967055434Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:44.968729 containerd[2643]: time="2025-05-17T00:14:44.968700955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:44.969360 containerd[2643]: time="2025-05-17T00:14:44.969332155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 1.81208632s" May 17 00:14:44.969387 containerd[2643]: time="2025-05-17T00:14:44.969364315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 00:14:44.974809 containerd[2643]: time="2025-05-17T00:14:44.974781155Z" level=info msg="CreateContainer within sandbox \"2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:14:44.981148 containerd[2643]: time="2025-05-17T00:14:44.981120035Z" level=info 
msg="CreateContainer within sandbox \"2f04d389bc03a0ca4fa13d90b17e92dc31dc3227c0288d5bf6c123dbe47f89a2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e74e3e0b7d526586e5b79197bfe4e50e3ed11ceedb240ee15e9c92c2f62a735d\"" May 17 00:14:44.981466 containerd[2643]: time="2025-05-17T00:14:44.981442835Z" level=info msg="StartContainer for \"e74e3e0b7d526586e5b79197bfe4e50e3ed11ceedb240ee15e9c92c2f62a735d\"" May 17 00:14:45.014064 systemd[1]: Started cri-containerd-e74e3e0b7d526586e5b79197bfe4e50e3ed11ceedb240ee15e9c92c2f62a735d.scope - libcontainer container e74e3e0b7d526586e5b79197bfe4e50e3ed11ceedb240ee15e9c92c2f62a735d. May 17 00:14:45.034416 containerd[2643]: time="2025-05-17T00:14:45.034387197Z" level=info msg="StartContainer for \"e74e3e0b7d526586e5b79197bfe4e50e3ed11ceedb240ee15e9c92c2f62a735d\" returns successfully" May 17 00:14:45.166327 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:14:45.166396 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 17 00:14:45.177198 kubelet[4097]: I0517 00:14:45.177151 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dbmdq" podStartSLOduration=0.745548694 podStartE2EDuration="6.177135483s" podCreationTimestamp="2025-05-17 00:14:39 +0000 UTC" firstStartedPulling="2025-05-17 00:14:39.538281486 +0000 UTC m=+22.499220641" lastFinishedPulling="2025-05-17 00:14:44.969868275 +0000 UTC m=+27.930807430" observedRunningTime="2025-05-17 00:14:45.176824243 +0000 UTC m=+28.137763398" watchObservedRunningTime="2025-05-17 00:14:45.177135483 +0000 UTC m=+28.138074638" May 17 00:14:45.224921 containerd[2643]: time="2025-05-17T00:14:45.224832165Z" level=info msg="StopPodSandbox for \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\"" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.262 [INFO][6191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.263 [INFO][6191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" iface="eth0" netns="/var/run/netns/cni-303e3b52-b5e6-54d2-2f9e-25a040543df9" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.263 [INFO][6191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" iface="eth0" netns="/var/run/netns/cni-303e3b52-b5e6-54d2-2f9e-25a040543df9" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.263 [INFO][6191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" iface="eth0" netns="/var/run/netns/cni-303e3b52-b5e6-54d2-2f9e-25a040543df9" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.263 [INFO][6191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.263 [INFO][6191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.296 [INFO][6224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.296 [INFO][6224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.296 [INFO][6224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.303 [WARNING][6224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.303 [INFO][6224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.304 [INFO][6224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:45.307954 containerd[2643]: 2025-05-17 00:14:45.306 [INFO][6191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:14:45.308265 containerd[2643]: time="2025-05-17T00:14:45.308080008Z" level=info msg="TearDown network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\" successfully" May 17 00:14:45.308265 containerd[2643]: time="2025-05-17T00:14:45.308104688Z" level=info msg="StopPodSandbox for \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\" returns successfully" May 17 00:14:45.365868 kubelet[4097]: I0517 00:14:45.365836 4097 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7jst\" (UniqueName: \"kubernetes.io/projected/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-kube-api-access-n7jst\") pod \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\" (UID: \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\") " May 17 00:14:45.365868 kubelet[4097]: I0517 00:14:45.365871 4097 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-backend-key-pair\") pod \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\" (UID: \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\") " May 17 00:14:45.368192 kubelet[4097]: I0517 00:14:45.368161 4097 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-kube-api-access-n7jst" (OuterVolumeSpecName: "kube-api-access-n7jst") pod "701b4956-7c9e-46f9-9aea-31b0e2bd2c8b" (UID: "701b4956-7c9e-46f9-9aea-31b0e2bd2c8b"). InnerVolumeSpecName "kube-api-access-n7jst". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:14:45.368242 kubelet[4097]: I0517 00:14:45.368219 4097 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "701b4956-7c9e-46f9-9aea-31b0e2bd2c8b" (UID: "701b4956-7c9e-46f9-9aea-31b0e2bd2c8b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:14:45.466412 kubelet[4097]: I0517 00:14:45.466389 4097 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-ca-bundle\") pod \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\" (UID: \"701b4956-7c9e-46f9-9aea-31b0e2bd2c8b\") " May 17 00:14:45.466474 kubelet[4097]: I0517 00:14:45.466454 4097 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7jst\" (UniqueName: \"kubernetes.io/projected/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-kube-api-access-n7jst\") on node \"ci-4081.3.3-n-02409cc2a5\" DevicePath \"\"" May 17 00:14:45.466474 kubelet[4097]: I0517 00:14:45.466465 4097 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-02409cc2a5\" DevicePath \"\"" May 17 00:14:45.466757 kubelet[4097]: I0517 00:14:45.466728 4097 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "701b4956-7c9e-46f9-9aea-31b0e2bd2c8b" (UID: "701b4956-7c9e-46f9-9aea-31b0e2bd2c8b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:14:45.566808 kubelet[4097]: I0517 00:14:45.566778 4097 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b-whisker-ca-bundle\") on node \"ci-4081.3.3-n-02409cc2a5\" DevicePath \"\"" May 17 00:14:45.950302 systemd[1]: run-netns-cni\x2d303e3b52\x2db5e6\x2d54d2\x2d2f9e\x2d25a040543df9.mount: Deactivated successfully. 
May 17 00:14:45.950380 systemd[1]: var-lib-kubelet-pods-701b4956\x2d7c9e\x2d46f9\x2d9aea\x2d31b0e2bd2c8b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn7jst.mount: Deactivated successfully. May 17 00:14:45.950443 systemd[1]: var-lib-kubelet-pods-701b4956\x2d7c9e\x2d46f9\x2d9aea\x2d31b0e2bd2c8b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:14:46.170201 systemd[1]: Removed slice kubepods-besteffort-pod701b4956_7c9e_46f9_9aea_31b0e2bd2c8b.slice - libcontainer container kubepods-besteffort-pod701b4956_7c9e_46f9_9aea_31b0e2bd2c8b.slice. May 17 00:14:46.209285 systemd[1]: Created slice kubepods-besteffort-podf9ad7e26_3a56_408f_a437_28e846a147e2.slice - libcontainer container kubepods-besteffort-podf9ad7e26_3a56_408f_a437_28e846a147e2.slice. May 17 00:14:46.270692 kubelet[4097]: I0517 00:14:46.270656 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9ad7e26-3a56-408f-a437-28e846a147e2-whisker-ca-bundle\") pod \"whisker-6758cf69db-8wts4\" (UID: \"f9ad7e26-3a56-408f-a437-28e846a147e2\") " pod="calico-system/whisker-6758cf69db-8wts4" May 17 00:14:46.271054 kubelet[4097]: I0517 00:14:46.270704 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ptjq\" (UniqueName: \"kubernetes.io/projected/f9ad7e26-3a56-408f-a437-28e846a147e2-kube-api-access-5ptjq\") pod \"whisker-6758cf69db-8wts4\" (UID: \"f9ad7e26-3a56-408f-a437-28e846a147e2\") " pod="calico-system/whisker-6758cf69db-8wts4" May 17 00:14:46.271054 kubelet[4097]: I0517 00:14:46.270734 4097 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9ad7e26-3a56-408f-a437-28e846a147e2-whisker-backend-key-pair\") pod \"whisker-6758cf69db-8wts4\" (UID: 
\"f9ad7e26-3a56-408f-a437-28e846a147e2\") " pod="calico-system/whisker-6758cf69db-8wts4" May 17 00:14:46.511710 containerd[2643]: time="2025-05-17T00:14:46.511599135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6758cf69db-8wts4,Uid:f9ad7e26-3a56-408f-a437-28e846a147e2,Namespace:calico-system,Attempt:0,}" May 17 00:14:46.607212 systemd-networkd[2553]: cali4ad03ffa8ef: Link UP May 17 00:14:46.607421 systemd-networkd[2553]: cali4ad03ffa8ef: Gained carrier May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.532 [INFO][6525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.543 [INFO][6525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0 whisker-6758cf69db- calico-system f9ad7e26-3a56-408f-a437-28e846a147e2 851 0 2025-05-17 00:14:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6758cf69db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 whisker-6758cf69db-8wts4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4ad03ffa8ef [] [] }} ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.543 [INFO][6525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.563 [INFO][6551] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" HandleID="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.563 [INFO][6551] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" HandleID="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000199a70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"whisker-6758cf69db-8wts4", "timestamp":"2025-05-17 00:14:46.563671617 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.563 [INFO][6551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.563 [INFO][6551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.563 [INFO][6551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.572 [INFO][6551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.575 [INFO][6551] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.578 [INFO][6551] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.579 [INFO][6551] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.581 [INFO][6551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.581 [INFO][6551] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.582 [INFO][6551] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822 May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.584 [INFO][6551] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.600 [INFO][6551] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.65/26] block=192.168.69.64/26 handle="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.600 [INFO][6551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.65/26] handle="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.600 [INFO][6551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:46.615059 containerd[2643]: 2025-05-17 00:14:46.600 [INFO][6551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.65/26] IPv6=[] ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" HandleID="k8s-pod-network.9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" May 17 00:14:46.615450 containerd[2643]: 2025-05-17 00:14:46.601 [INFO][6525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0", GenerateName:"whisker-6758cf69db-", Namespace:"calico-system", SelfLink:"", UID:"f9ad7e26-3a56-408f-a437-28e846a147e2", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6758cf69db", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"whisker-6758cf69db-8wts4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ad03ffa8ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:46.615450 containerd[2643]: 2025-05-17 00:14:46.602 [INFO][6525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.65/32] ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" May 17 00:14:46.615450 containerd[2643]: 2025-05-17 00:14:46.602 [INFO][6525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ad03ffa8ef ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" May 17 00:14:46.615450 containerd[2643]: 2025-05-17 00:14:46.607 [INFO][6525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" May 17 00:14:46.615450 containerd[2643]: 2025-05-17 00:14:46.607 [INFO][6525] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0", GenerateName:"whisker-6758cf69db-", Namespace:"calico-system", SelfLink:"", UID:"f9ad7e26-3a56-408f-a437-28e846a147e2", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6758cf69db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822", Pod:"whisker-6758cf69db-8wts4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ad03ffa8ef", MAC:"9a:e8:a7:db:a7:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:46.615450 containerd[2643]: 2025-05-17 00:14:46.613 [INFO][6525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822" 
Namespace="calico-system" Pod="whisker-6758cf69db-8wts4" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--6758cf69db--8wts4-eth0" May 17 00:14:46.627458 containerd[2643]: time="2025-05-17T00:14:46.627109979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:46.627486 containerd[2643]: time="2025-05-17T00:14:46.627450499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:46.627486 containerd[2643]: time="2025-05-17T00:14:46.627463619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:46.627555 containerd[2643]: time="2025-05-17T00:14:46.627536539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:46.655078 systemd[1]: Started cri-containerd-9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822.scope - libcontainer container 9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822. 
May 17 00:14:46.678027 containerd[2643]: time="2025-05-17T00:14:46.677989701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6758cf69db-8wts4,Uid:f9ad7e26-3a56-408f-a437-28e846a147e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fc5b1aff467ad0b41601732d4d86ea31ede91cd5c7e7b08f3f6d78fcea36822\"" May 17 00:14:46.679021 containerd[2643]: time="2025-05-17T00:14:46.678998381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:14:46.725290 containerd[2643]: time="2025-05-17T00:14:46.725247663Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:46.725509 containerd[2643]: time="2025-05-17T00:14:46.725479783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:46.725567 containerd[2643]: time="2025-05-17T00:14:46.725543423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:14:46.725708 kubelet[4097]: E0517 00:14:46.725651 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 
Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:14:46.725765 kubelet[4097]: E0517 00:14:46.725726 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:14:46.725948 kubelet[4097]: E0517 00:14:46.725916 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d78f9656fa7f429e98165d5566619fe2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:46.727493 containerd[2643]: time="2025-05-17T00:14:46.727472823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:14:46.752632 containerd[2643]: time="2025-05-17T00:14:46.752591264Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:46.752848 containerd[2643]: time="2025-05-17T00:14:46.752823264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:46.752914 containerd[2643]: time="2025-05-17T00:14:46.752888024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:14:46.753004 kubelet[4097]: E0517 00:14:46.752970 4097 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:14:46.753040 kubelet[4097]: E0517 00:14:46.753014 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:14:46.753184 kubelet[4097]: E0517 00:14:46.753132 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:46.754330 kubelet[4097]: E0517 00:14:46.754294 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:14:47.115886 kubelet[4097]: I0517 00:14:47.115856 4097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701b4956-7c9e-46f9-9aea-31b0e2bd2c8b" path="/var/lib/kubelet/pods/701b4956-7c9e-46f9-9aea-31b0e2bd2c8b/volumes" May 17 00:14:47.169777 kubelet[4097]: E0517 00:14:47.169744 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:14:48.171543 kubelet[4097]: E0517 00:14:48.171504 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:14:48.541027 systemd-networkd[2553]: cali4ad03ffa8ef: Gained IPv6LL May 17 00:14:54.112389 containerd[2643]: time="2025-05-17T00:14:54.112277359Z" level=info msg="StopPodSandbox for \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\"" May 17 00:14:54.112389 containerd[2643]: time="2025-05-17T00:14:54.112276879Z" level=info msg="StopPodSandbox for \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\"" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7077] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7077] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" iface="eth0" netns="/var/run/netns/cni-960dde82-3348-2d61-b7fa-b792f5d2d849" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7077] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" iface="eth0" netns="/var/run/netns/cni-960dde82-3348-2d61-b7fa-b792f5d2d849" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.152 [INFO][7077] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" iface="eth0" netns="/var/run/netns/cni-960dde82-3348-2d61-b7fa-b792f5d2d849" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.152 [INFO][7077] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.152 [INFO][7077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.168 [INFO][7115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.169 [INFO][7115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.169 [INFO][7115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.177 [WARNING][7115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.177 [INFO][7115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.178 [INFO][7115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:54.180333 containerd[2643]: 2025-05-17 00:14:54.179 [INFO][7077] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:14:54.180720 containerd[2643]: time="2025-05-17T00:14:54.180489640Z" level=info msg="TearDown network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\" successfully" May 17 00:14:54.180720 containerd[2643]: time="2025-05-17T00:14:54.180513360Z" level=info msg="StopPodSandbox for \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\" returns successfully" May 17 00:14:54.180928 containerd[2643]: time="2025-05-17T00:14:54.180904920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-2j2bt,Uid:10344652-ca68-479a-86f6-162b29976180,Namespace:calico-apiserver,Attempt:1,}" May 17 00:14:54.182398 systemd[1]: run-netns-cni\x2d960dde82\x2d3348\x2d2d61\x2db7fa\x2db792f5d2d849.mount: Deactivated successfully. 
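Every ErrImagePull / ImagePullBackOff entry above fails at the same step: the anonymous token fetch against ghcr.io returns 403 Forbidden. The URL-encoded `scope` query parameter in the failing GET names exactly which repository the pull was for. As a quick diagnostic aid, a small sketch (hypothetical helper, not part of the node's tooling) that decodes such a token URL from the log line:

```python
from urllib.parse import urlparse, parse_qs

def decode_token_scope(token_url: str) -> dict:
    """Decode a registry token request URL as logged by containerd/kubelet.

    The 'scope' parameter follows the registry token-auth grammar
    'repository:<name>:<actions>' (percent-encoded in the log line;
    parse_qs decodes it).
    """
    qs = parse_qs(urlparse(token_url).query)
    kind, name, actions = qs["scope"][0].split(":")
    return {
        "service": qs["service"][0],
        "kind": kind,
        "repository": name,
        "actions": actions.split(","),
    }

# The exact URL from the failing whisker pull in the log above:
url = ("https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2F"
       "whisker%3Apull&service=ghcr.io")
print(decode_token_scope(url))
# {'service': 'ghcr.io', 'kind': 'repository',
#  'repository': 'flatcar/calico/whisker', 'actions': ['pull']}
```

Decoding both failing URLs this way confirms the 403s cover `flatcar/calico/whisker` and `flatcar/calico/whisker-backend` pulls only; the registry-side denial, not the node, is the common cause of the pod's back-off.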
May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7076] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" iface="eth0" netns="/var/run/netns/cni-25c2c71c-873e-38e4-f7a0-43834e93403e" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7076] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" iface="eth0" netns="/var/run/netns/cni-25c2c71c-873e-38e4-f7a0-43834e93403e" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7076] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" iface="eth0" netns="/var/run/netns/cni-25c2c71c-873e-38e4-f7a0-43834e93403e" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.151 [INFO][7076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.168 [INFO][7113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.169 [INFO][7113] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.178 [INFO][7113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.185 [WARNING][7113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.185 [INFO][7113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.186 [INFO][7113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:54.189152 containerd[2643]: 2025-05-17 00:14:54.187 [INFO][7076] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:14:54.189433 containerd[2643]: time="2025-05-17T00:14:54.189273921Z" level=info msg="TearDown network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\" successfully" May 17 00:14:54.189433 containerd[2643]: time="2025-05-17T00:14:54.189293801Z" level=info msg="StopPodSandbox for \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\" returns successfully" May 17 00:14:54.189699 containerd[2643]: time="2025-05-17T00:14:54.189675201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2mhc,Uid:c69e66b4-623c-48d0-af99-ebae9f6f8a0f,Namespace:kube-system,Attempt:1,}" May 17 00:14:54.190881 systemd[1]: run-netns-cni\x2d25c2c71c\x2d873e\x2d38e4\x2df7a0\x2d43834e93403e.mount: Deactivated successfully. May 17 00:14:54.265311 systemd-networkd[2553]: calie3e12b0e593: Link UP May 17 00:14:54.265539 systemd-networkd[2553]: calie3e12b0e593: Gained carrier May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.202 [INFO][7155] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.217 [INFO][7155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0 calico-apiserver-847d49c9d7- calico-apiserver 10344652-ca68-479a-86f6-162b29976180 897 0 2025-05-17 00:14:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847d49c9d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 calico-apiserver-847d49c9d7-2j2bt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3e12b0e593 [] [] }} 
ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.217 [INFO][7155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.237 [INFO][7211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" HandleID="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.238 [INFO][7211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" HandleID="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40007b1460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"calico-apiserver-847d49c9d7-2j2bt", "timestamp":"2025-05-17 00:14:54.237905282 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.238 [INFO][7211] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.238 [INFO][7211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.238 [INFO][7211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.246 [INFO][7211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.249 [INFO][7211] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.252 [INFO][7211] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.253 [INFO][7211] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.255 [INFO][7211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.255 [INFO][7211] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.256 [INFO][7211] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.258 [INFO][7211] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 
handle="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.262 [INFO][7211] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.66/26] block=192.168.69.64/26 handle="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.262 [INFO][7211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.66/26] handle="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.262 [INFO][7211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:54.273869 containerd[2643]: 2025-05-17 00:14:54.262 [INFO][7211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.66/26] IPv6=[] ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" HandleID="k8s-pod-network.e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.274495 containerd[2643]: 2025-05-17 00:14:54.263 [INFO][7155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"10344652-ca68-479a-86f6-162b29976180", ResourceVersion:"897", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"calico-apiserver-847d49c9d7-2j2bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e12b0e593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:54.274495 containerd[2643]: 2025-05-17 00:14:54.264 [INFO][7155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.66/32] ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.274495 containerd[2643]: 2025-05-17 00:14:54.264 [INFO][7155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3e12b0e593 ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.274495 containerd[2643]: 2025-05-17 00:14:54.265 [INFO][7155] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.274495 containerd[2643]: 2025-05-17 00:14:54.265 [INFO][7155] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"10344652-ca68-479a-86f6-162b29976180", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d", Pod:"calico-apiserver-847d49c9d7-2j2bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.66/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e12b0e593", MAC:"fe:bc:a7:63:44:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:54.274495 containerd[2643]: 2025-05-17 00:14:54.272 [INFO][7155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-2j2bt" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:14:54.286709 containerd[2643]: time="2025-05-17T00:14:54.286343403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:54.286709 containerd[2643]: time="2025-05-17T00:14:54.286684523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:54.286709 containerd[2643]: time="2025-05-17T00:14:54.286697963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:54.286794 containerd[2643]: time="2025-05-17T00:14:54.286777203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:54.308078 systemd[1]: Started cri-containerd-e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d.scope - libcontainer container e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d. 
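The IPAM exchange above (acquire host-wide lock, load block 192.168.69.64/26 by affinity, claim 192.168.69.66) follows a first-free-address pattern over the node's /26 block. A minimal sketch of just the allocation step using Python's `ipaddress` module — illustrative only, since Calico's real allocator persists blocks and handles in its datastore and runs under the lock shown in the log:

```python
import ipaddress

def claim_next_ip(block_cidr: str, assigned: set) -> str:
    """Return the first unassigned host address in an IPAM block.

    Mirrors the log's 'Attempting to assign 1 addresses from block'
    step: walk the block's usable addresses, skip ones already claimed.
    """
    block = ipaddress.ip_network(block_cidr)
    for addr in block.hosts():  # excludes network/broadcast addresses
        if str(addr) not in assigned:
            return str(addr)
    raise RuntimeError("block %s exhausted" % block_cidr)

# Assuming .65 was already claimed by an earlier endpoint on this node,
# the next claim yields .66, matching 'Successfully claimed IPs:
# [192.168.69.66/26]' in the log above.
print(claim_next_ip("192.168.69.64/26", {"192.168.69.65"}))
# 192.168.69.66
```

A /26 block gives 62 usable workload addresses per node; the "Trying affinity" / "Affinity is confirmed" lines show the node reusing its already-affine block rather than claiming a new one.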
May 17 00:14:54.331209 containerd[2643]: time="2025-05-17T00:14:54.331175604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-2j2bt,Uid:10344652-ca68-479a-86f6-162b29976180,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d\"" May 17 00:14:54.332240 containerd[2643]: time="2025-05-17T00:14:54.332215764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:14:54.376012 systemd-networkd[2553]: cali26e9a95076b: Link UP May 17 00:14:54.376283 systemd-networkd[2553]: cali26e9a95076b: Gained carrier May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.208 [INFO][7171] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.220 [INFO][7171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0 coredns-668d6bf9bc- kube-system c69e66b4-623c-48d0-af99-ebae9f6f8a0f 896 0 2025-05-17 00:14:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 coredns-668d6bf9bc-t2mhc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26e9a95076b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.220 [INFO][7171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.239 [INFO][7217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" HandleID="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.239 [INFO][7217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" HandleID="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000366fa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"coredns-668d6bf9bc-t2mhc", "timestamp":"2025-05-17 00:14:54.239007842 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.239 [INFO][7217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.262 [INFO][7217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.262 [INFO][7217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.347 [INFO][7217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.351 [INFO][7217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.354 [INFO][7217] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.356 [INFO][7217] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.357 [INFO][7217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.357 [INFO][7217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.358 [INFO][7217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1 May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.369 [INFO][7217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.372 [INFO][7217] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.67/26] block=192.168.69.64/26 handle="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.372 [INFO][7217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.67/26] handle="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.372 [INFO][7217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:54.383901 containerd[2643]: 2025-05-17 00:14:54.372 [INFO][7217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.67/26] IPv6=[] ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" HandleID="k8s-pod-network.f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.384355 containerd[2643]: 2025-05-17 00:14:54.374 [INFO][7171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c69e66b4-623c-48d0-af99-ebae9f6f8a0f", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"coredns-668d6bf9bc-t2mhc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26e9a95076b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:54.384355 containerd[2643]: 2025-05-17 00:14:54.374 [INFO][7171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.67/32] ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.384355 containerd[2643]: 2025-05-17 00:14:54.374 [INFO][7171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26e9a95076b ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.384355 containerd[2643]: 2025-05-17 00:14:54.376 [INFO][7171] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.384355 containerd[2643]: 2025-05-17 00:14:54.376 [INFO][7171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c69e66b4-623c-48d0-af99-ebae9f6f8a0f", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1", Pod:"coredns-668d6bf9bc-t2mhc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26e9a95076b", MAC:"c2:07:7f:a7:d7:2d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:54.384355 containerd[2643]: 2025-05-17 00:14:54.382 [INFO][7171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2mhc" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:14:54.396536 containerd[2643]: time="2025-05-17T00:14:54.396475965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:54.396536 containerd[2643]: time="2025-05-17T00:14:54.396530845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:54.396612 containerd[2643]: time="2025-05-17T00:14:54.396542085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:54.396634 containerd[2643]: time="2025-05-17T00:14:54.396620125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:54.428083 systemd[1]: Started cri-containerd-f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1.scope - libcontainer container f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1. 
May 17 00:14:54.451220 containerd[2643]: time="2025-05-17T00:14:54.451180486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2mhc,Uid:c69e66b4-623c-48d0-af99-ebae9f6f8a0f,Namespace:kube-system,Attempt:1,} returns sandbox id \"f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1\"" May 17 00:14:54.452916 containerd[2643]: time="2025-05-17T00:14:54.452898367Z" level=info msg="CreateContainer within sandbox \"f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:14:54.458990 containerd[2643]: time="2025-05-17T00:14:54.458958527Z" level=info msg="CreateContainer within sandbox \"f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"862b812f6c5c3d42b05501746858b7ec9d4ee7b2e1ddf0acf059c27a0970fe3b\"" May 17 00:14:54.459302 containerd[2643]: time="2025-05-17T00:14:54.459280007Z" level=info msg="StartContainer for \"862b812f6c5c3d42b05501746858b7ec9d4ee7b2e1ddf0acf059c27a0970fe3b\"" May 17 00:14:54.484074 systemd[1]: Started cri-containerd-862b812f6c5c3d42b05501746858b7ec9d4ee7b2e1ddf0acf059c27a0970fe3b.scope - libcontainer container 862b812f6c5c3d42b05501746858b7ec9d4ee7b2e1ddf0acf059c27a0970fe3b. 
May 17 00:14:54.501233 containerd[2643]: time="2025-05-17T00:14:54.501203968Z" level=info msg="StartContainer for \"862b812f6c5c3d42b05501746858b7ec9d4ee7b2e1ddf0acf059c27a0970fe3b\" returns successfully" May 17 00:14:54.974442 containerd[2643]: time="2025-05-17T00:14:54.974404058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:54.974589 containerd[2643]: time="2025-05-17T00:14:54.974430298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 17 00:14:54.975213 containerd[2643]: time="2025-05-17T00:14:54.975189298Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:54.976964 containerd[2643]: time="2025-05-17T00:14:54.976936898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:54.977676 containerd[2643]: time="2025-05-17T00:14:54.977647978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 645.402014ms" May 17 00:14:54.977700 containerd[2643]: time="2025-05-17T00:14:54.977681338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:14:54.979874 containerd[2643]: time="2025-05-17T00:14:54.979841378Z" level=info msg="CreateContainer within sandbox 
\"e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:14:54.985867 containerd[2643]: time="2025-05-17T00:14:54.985836098Z" level=info msg="CreateContainer within sandbox \"e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4ec463cc8113c6effb8e53b7a3de8fa9b88c129c5a2a6b7662a21d7787a50a12\"" May 17 00:14:54.986268 containerd[2643]: time="2025-05-17T00:14:54.986238858Z" level=info msg="StartContainer for \"4ec463cc8113c6effb8e53b7a3de8fa9b88c129c5a2a6b7662a21d7787a50a12\"" May 17 00:14:55.014064 systemd[1]: Started cri-containerd-4ec463cc8113c6effb8e53b7a3de8fa9b88c129c5a2a6b7662a21d7787a50a12.scope - libcontainer container 4ec463cc8113c6effb8e53b7a3de8fa9b88c129c5a2a6b7662a21d7787a50a12. May 17 00:14:55.038009 containerd[2643]: time="2025-05-17T00:14:55.037978539Z" level=info msg="StartContainer for \"4ec463cc8113c6effb8e53b7a3de8fa9b88c129c5a2a6b7662a21d7787a50a12\" returns successfully" May 17 00:14:55.112279 containerd[2643]: time="2025-05-17T00:14:55.112234381Z" level=info msg="StopPodSandbox for \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\"" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.146 [INFO][7542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.146 [INFO][7542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" iface="eth0" netns="/var/run/netns/cni-3fa98c98-bfed-b11b-5c77-3aa83a8697f8" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.147 [INFO][7542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" iface="eth0" netns="/var/run/netns/cni-3fa98c98-bfed-b11b-5c77-3aa83a8697f8" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.147 [INFO][7542] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" iface="eth0" netns="/var/run/netns/cni-3fa98c98-bfed-b11b-5c77-3aa83a8697f8" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.147 [INFO][7542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.147 [INFO][7542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.164 [INFO][7567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.164 [INFO][7567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.164 [INFO][7567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.172 [WARNING][7567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.172 [INFO][7567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.173 [INFO][7567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:55.175737 containerd[2643]: 2025-05-17 00:14:55.174 [INFO][7542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:14:55.176267 containerd[2643]: time="2025-05-17T00:14:55.175941902Z" level=info msg="TearDown network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\" successfully" May 17 00:14:55.176267 containerd[2643]: time="2025-05-17T00:14:55.175972862Z" level=info msg="StopPodSandbox for \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\" returns successfully" May 17 00:14:55.176480 containerd[2643]: time="2025-05-17T00:14:55.176458342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2ztl,Uid:67172858-15b2-4ceb-9630-af18b81413de,Namespace:calico-system,Attempt:1,}" May 17 00:14:55.186380 systemd[1]: run-netns-cni\x2d3fa98c98\x2dbfed\x2db11b\x2d5c77\x2d3aa83a8697f8.mount: Deactivated successfully. 
May 17 00:14:55.191580 kubelet[4097]: I0517 00:14:55.191533 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t2mhc" podStartSLOduration=31.191517663 podStartE2EDuration="31.191517663s" podCreationTimestamp="2025-05-17 00:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:55.191354103 +0000 UTC m=+38.152293258" watchObservedRunningTime="2025-05-17 00:14:55.191517663 +0000 UTC m=+38.152456778" May 17 00:14:55.205723 kubelet[4097]: I0517 00:14:55.205678 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-847d49c9d7-2j2bt" podStartSLOduration=20.559435769 podStartE2EDuration="21.205665423s" podCreationTimestamp="2025-05-17 00:14:34 +0000 UTC" firstStartedPulling="2025-05-17 00:14:54.332020124 +0000 UTC m=+37.292959239" lastFinishedPulling="2025-05-17 00:14:54.978249738 +0000 UTC m=+37.939188893" observedRunningTime="2025-05-17 00:14:55.205413503 +0000 UTC m=+38.166352658" watchObservedRunningTime="2025-05-17 00:14:55.205665423 +0000 UTC m=+38.166604578" May 17 00:14:55.258216 systemd-networkd[2553]: cali3c25fb25eca: Link UP May 17 00:14:55.259263 systemd-networkd[2553]: cali3c25fb25eca: Gained carrier May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.196 [INFO][7589] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.206 [INFO][7589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0 csi-node-driver- calico-system 67172858-15b2-4ceb-9630-af18b81413de 914 0 2025-05-17 00:14:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 csi-node-driver-b2ztl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3c25fb25eca [] [] }} ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.207 [INFO][7589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.227 [INFO][7619] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" HandleID="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.227 [INFO][7619] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" HandleID="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dc30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"csi-node-driver-b2ztl", "timestamp":"2025-05-17 00:14:55.227277463 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.227 [INFO][7619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.227 [INFO][7619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.227 [INFO][7619] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.235 [INFO][7619] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.238 [INFO][7619] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.241 [INFO][7619] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.242 [INFO][7619] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.244 [INFO][7619] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.244 [INFO][7619] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.249 [INFO][7619] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6 May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.251 [INFO][7619] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.255 [INFO][7619] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.68/26] block=192.168.69.64/26 handle="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.255 [INFO][7619] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.68/26] handle="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.255 [INFO][7619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:14:55.267762 containerd[2643]: 2025-05-17 00:14:55.255 [INFO][7619] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.68/26] IPv6=[] ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" HandleID="k8s-pod-network.e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.268290 containerd[2643]: 2025-05-17 00:14:55.256 [INFO][7589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67172858-15b2-4ceb-9630-af18b81413de", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"csi-node-driver-b2ztl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c25fb25eca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:55.268290 containerd[2643]: 2025-05-17 00:14:55.257 [INFO][7589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.68/32] ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.268290 containerd[2643]: 2025-05-17 00:14:55.257 [INFO][7589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c25fb25eca ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.268290 containerd[2643]: 2025-05-17 00:14:55.258 [INFO][7589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.268290 containerd[2643]: 2025-05-17 00:14:55.259 [INFO][7589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"67172858-15b2-4ceb-9630-af18b81413de", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6", Pod:"csi-node-driver-b2ztl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c25fb25eca", MAC:"de:32:68:b6:3d:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:55.268290 containerd[2643]: 2025-05-17 00:14:55.265 [INFO][7589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6" Namespace="calico-system" Pod="csi-node-driver-b2ztl" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:14:55.279724 containerd[2643]: time="2025-05-17T00:14:55.279665185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:55.279724 containerd[2643]: time="2025-05-17T00:14:55.279719025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:55.279773 containerd[2643]: time="2025-05-17T00:14:55.279730625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:55.279818 containerd[2643]: time="2025-05-17T00:14:55.279801585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:55.307101 systemd[1]: Started cri-containerd-e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6.scope - libcontainer container e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6. May 17 00:14:55.323313 containerd[2643]: time="2025-05-17T00:14:55.323281905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2ztl,Uid:67172858-15b2-4ceb-9630-af18b81413de,Namespace:calico-system,Attempt:1,} returns sandbox id \"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6\"" May 17 00:14:55.324317 containerd[2643]: time="2025-05-17T00:14:55.324294985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:14:55.615398 containerd[2643]: time="2025-05-17T00:14:55.615358792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:55.615483 containerd[2643]: time="2025-05-17T00:14:55.615421072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 17 00:14:55.616125 containerd[2643]: time="2025-05-17T00:14:55.616101912Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:55.617787 containerd[2643]: time="2025-05-17T00:14:55.617762512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:55.618507 containerd[2643]: time="2025-05-17T00:14:55.618480312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 294.155967ms" May 17 00:14:55.618547 containerd[2643]: time="2025-05-17T00:14:55.618512512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 17 00:14:55.620094 containerd[2643]: time="2025-05-17T00:14:55.620070312Z" level=info msg="CreateContainer within sandbox \"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:14:55.626142 containerd[2643]: time="2025-05-17T00:14:55.626111112Z" level=info msg="CreateContainer within sandbox \"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"09b01bd5292964fc9852e3ca3d116fed8618c83440ab71f54296b00409d4a58b\"" May 17 00:14:55.626471 containerd[2643]: time="2025-05-17T00:14:55.626440152Z" level=info msg="StartContainer for \"09b01bd5292964fc9852e3ca3d116fed8618c83440ab71f54296b00409d4a58b\"" May 17 00:14:55.655074 systemd[1]: Started cri-containerd-09b01bd5292964fc9852e3ca3d116fed8618c83440ab71f54296b00409d4a58b.scope - libcontainer container 
09b01bd5292964fc9852e3ca3d116fed8618c83440ab71f54296b00409d4a58b. May 17 00:14:55.683284 containerd[2643]: time="2025-05-17T00:14:55.683246953Z" level=info msg="StartContainer for \"09b01bd5292964fc9852e3ca3d116fed8618c83440ab71f54296b00409d4a58b\" returns successfully" May 17 00:14:55.684044 containerd[2643]: time="2025-05-17T00:14:55.684025353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:14:55.772982 systemd-networkd[2553]: calie3e12b0e593: Gained IPv6LL May 17 00:14:56.017569 containerd[2643]: time="2025-05-17T00:14:56.017458440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:56.017569 containerd[2643]: time="2025-05-17T00:14:56.017543520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 17 00:14:56.018325 containerd[2643]: time="2025-05-17T00:14:56.018304040Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:56.020229 containerd[2643]: time="2025-05-17T00:14:56.020207800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:56.021049 containerd[2643]: time="2025-05-17T00:14:56.021019040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 336.963887ms" 
May 17 00:14:56.021081 containerd[2643]: time="2025-05-17T00:14:56.021056360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 17 00:14:56.022723 containerd[2643]: time="2025-05-17T00:14:56.022701720Z" level=info msg="CreateContainer within sandbox \"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:14:56.028069 containerd[2643]: time="2025-05-17T00:14:56.028039800Z" level=info msg="CreateContainer within sandbox \"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"977797239feb6cae6f23f7088e10de5307f1fde7e315c562abd2100d195c2e93\"" May 17 00:14:56.028367 containerd[2643]: time="2025-05-17T00:14:56.028338440Z" level=info msg="StartContainer for \"977797239feb6cae6f23f7088e10de5307f1fde7e315c562abd2100d195c2e93\"" May 17 00:14:56.061013 systemd[1]: Started cri-containerd-977797239feb6cae6f23f7088e10de5307f1fde7e315c562abd2100d195c2e93.scope - libcontainer container 977797239feb6cae6f23f7088e10de5307f1fde7e315c562abd2100d195c2e93. 
May 17 00:14:56.079809 containerd[2643]: time="2025-05-17T00:14:56.079778521Z" level=info msg="StartContainer for \"977797239feb6cae6f23f7088e10de5307f1fde7e315c562abd2100d195c2e93\" returns successfully" May 17 00:14:56.092978 systemd-networkd[2553]: cali26e9a95076b: Gained IPv6LL May 17 00:14:56.157914 kubelet[4097]: I0517 00:14:56.157884 4097 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:14:56.157914 kubelet[4097]: I0517 00:14:56.157916 4097 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:14:56.187408 kubelet[4097]: I0517 00:14:56.187375 4097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:14:56.195625 kubelet[4097]: I0517 00:14:56.195582 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b2ztl" podStartSLOduration=16.498030828 podStartE2EDuration="17.195551443s" podCreationTimestamp="2025-05-17 00:14:39 +0000 UTC" firstStartedPulling="2025-05-17 00:14:55.324104785 +0000 UTC m=+38.285043940" lastFinishedPulling="2025-05-17 00:14:56.0216254 +0000 UTC m=+38.982564555" observedRunningTime="2025-05-17 00:14:56.195361203 +0000 UTC m=+39.156300358" watchObservedRunningTime="2025-05-17 00:14:56.195551443 +0000 UTC m=+39.156490598" May 17 00:14:57.111983 containerd[2643]: time="2025-05-17T00:14:57.111930541Z" level=info msg="StopPodSandbox for \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\"" May 17 00:14:57.112370 containerd[2643]: time="2025-05-17T00:14:57.112034741Z" level=info msg="StopPodSandbox for \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\"" May 17 00:14:57.112370 containerd[2643]: time="2025-05-17T00:14:57.111937301Z" level=info msg="StopPodSandbox for 
\"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\"" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.146 [INFO][7949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.146 [INFO][7949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" iface="eth0" netns="/var/run/netns/cni-90a9e8a5-078f-e06d-9045-634a4cdf0469" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.147 [INFO][7949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" iface="eth0" netns="/var/run/netns/cni-90a9e8a5-078f-e06d-9045-634a4cdf0469" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.147 [INFO][7949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" iface="eth0" netns="/var/run/netns/cni-90a9e8a5-078f-e06d-9045-634a4cdf0469" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.147 [INFO][7949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.147 [INFO][7949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.164 [INFO][8010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.164 [INFO][8010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.164 [INFO][8010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.172 [WARNING][8010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.172 [INFO][8010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.173 [INFO][8010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.175436 containerd[2643]: 2025-05-17 00:14:57.174 [INFO][7949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:14:57.175842 containerd[2643]: time="2025-05-17T00:14:57.175709582Z" level=info msg="TearDown network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\" successfully" May 17 00:14:57.175842 containerd[2643]: time="2025-05-17T00:14:57.175736102Z" level=info msg="StopPodSandbox for \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\" returns successfully" May 17 00:14:57.176312 containerd[2643]: time="2025-05-17T00:14:57.176288222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6666cd8d-5cpvh,Uid:afa60deb-188d-4048-ba25-d7cae1d87a15,Namespace:calico-system,Attempt:1,}" May 17 00:14:57.177657 systemd[1]: run-netns-cni\x2d90a9e8a5\x2d078f\x2de06d\x2d9045\x2d634a4cdf0469.mount: Deactivated successfully. 
May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.149 [INFO][7950] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7950] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" iface="eth0" netns="/var/run/netns/cni-960fc892-e4a5-1d00-619b-21abb7d25a94" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7950] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" iface="eth0" netns="/var/run/netns/cni-960fc892-e4a5-1d00-619b-21abb7d25a94" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7950] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" iface="eth0" netns="/var/run/netns/cni-960fc892-e4a5-1d00-619b-21abb7d25a94" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7950] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.166 [INFO][8016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.166 [INFO][8016] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.173 [INFO][8016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.180 [WARNING][8016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.180 [INFO][8016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.181 [INFO][8016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.184002 containerd[2643]: 2025-05-17 00:14:57.182 [INFO][7950] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:14:57.184290 containerd[2643]: time="2025-05-17T00:14:57.184251423Z" level=info msg="TearDown network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\" successfully" May 17 00:14:57.184290 containerd[2643]: time="2025-05-17T00:14:57.184278663Z" level=info msg="StopPodSandbox for \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\" returns successfully" May 17 00:14:57.184847 containerd[2643]: time="2025-05-17T00:14:57.184822503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qstm5,Uid:2e72750b-2053-4375-95c0-ca47f0bf61d4,Namespace:kube-system,Attempt:1,}" May 17 00:14:57.186120 systemd[1]: run-netns-cni\x2d960fc892\x2de4a5\x2d1d00\x2d619b\x2d21abb7d25a94.mount: Deactivated successfully. May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7951] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" iface="eth0" netns="/var/run/netns/cni-c49c1a28-8e9f-fb43-9008-a5fd7025b6ce" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.150 [INFO][7951] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" iface="eth0" netns="/var/run/netns/cni-c49c1a28-8e9f-fb43-9008-a5fd7025b6ce" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.151 [INFO][7951] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" iface="eth0" netns="/var/run/netns/cni-c49c1a28-8e9f-fb43-9008-a5fd7025b6ce" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.151 [INFO][7951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.151 [INFO][7951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.167 [INFO][8018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.167 [INFO][8018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.181 [INFO][8018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.188 [WARNING][8018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.188 [INFO][8018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.189 [INFO][8018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.192707 containerd[2643]: 2025-05-17 00:14:57.191 [INFO][7951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:14:57.193015 containerd[2643]: time="2025-05-17T00:14:57.192863583Z" level=info msg="TearDown network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\" successfully" May 17 00:14:57.193015 containerd[2643]: time="2025-05-17T00:14:57.192885183Z" level=info msg="StopPodSandbox for \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\" returns successfully" May 17 00:14:57.193272 containerd[2643]: time="2025-05-17T00:14:57.193248783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-b5fcf,Uid:a41cd5df-5d9c-4907-bb35-9d4adffa8017,Namespace:calico-system,Attempt:1,}" May 17 00:14:57.194589 systemd[1]: run-netns-cni\x2dc49c1a28\x2d8e9f\x2dfb43\x2d9008\x2da5fd7025b6ce.mount: Deactivated successfully. 
May 17 00:14:57.244989 systemd-networkd[2553]: cali3c25fb25eca: Gained IPv6LL May 17 00:14:57.254976 systemd-networkd[2553]: cali56046756f14: Link UP May 17 00:14:57.255414 systemd-networkd[2553]: cali56046756f14: Gained carrier May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.197 [INFO][8069] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.207 [INFO][8069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0 calico-kube-controllers-7c6666cd8d- calico-system afa60deb-188d-4048-ba25-d7cae1d87a15 949 0 2025-05-17 00:14:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c6666cd8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 calico-kube-controllers-7c6666cd8d-5cpvh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali56046756f14 [] [] }} ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.207 [INFO][8069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.228 [INFO][8144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" HandleID="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.228 [INFO][8144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" HandleID="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003dff40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"calico-kube-controllers-7c6666cd8d-5cpvh", "timestamp":"2025-05-17 00:14:57.228231223 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.228 [INFO][8144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.228 [INFO][8144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.228 [INFO][8144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.236 [INFO][8144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.239 [INFO][8144] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.242 [INFO][8144] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.243 [INFO][8144] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.245 [INFO][8144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.245 [INFO][8144] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.246 [INFO][8144] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187 May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.248 [INFO][8144] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.252 [INFO][8144] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.69/26] block=192.168.69.64/26 handle="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.252 [INFO][8144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.69/26] handle="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.252 [INFO][8144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.264072 containerd[2643]: 2025-05-17 00:14:57.252 [INFO][8144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.69/26] IPv6=[] ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" HandleID="k8s-pod-network.0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.264589 containerd[2643]: 2025-05-17 00:14:57.253 [INFO][8069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0", GenerateName:"calico-kube-controllers-7c6666cd8d-", Namespace:"calico-system", SelfLink:"", UID:"afa60deb-188d-4048-ba25-d7cae1d87a15", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6666cd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"calico-kube-controllers-7c6666cd8d-5cpvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56046756f14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.264589 containerd[2643]: 2025-05-17 00:14:57.253 [INFO][8069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.69/32] ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.264589 containerd[2643]: 2025-05-17 00:14:57.253 [INFO][8069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56046756f14 ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.264589 containerd[2643]: 2025-05-17 00:14:57.255 [INFO][8069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.264589 containerd[2643]: 2025-05-17 00:14:57.255 [INFO][8069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0", GenerateName:"calico-kube-controllers-7c6666cd8d-", Namespace:"calico-system", SelfLink:"", UID:"afa60deb-188d-4048-ba25-d7cae1d87a15", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6666cd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187", Pod:"calico-kube-controllers-7c6666cd8d-5cpvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56046756f14", MAC:"fe:2a:44:3b:b4:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.264589 containerd[2643]: 2025-05-17 00:14:57.262 [INFO][8069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187" Namespace="calico-system" Pod="calico-kube-controllers-7c6666cd8d-5cpvh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:14:57.276220 containerd[2643]: time="2025-05-17T00:14:57.276164504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:57.276220 containerd[2643]: time="2025-05-17T00:14:57.276208984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:57.276220 containerd[2643]: time="2025-05-17T00:14:57.276220184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:57.276335 containerd[2643]: time="2025-05-17T00:14:57.276289624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:57.297073 systemd[1]: Started cri-containerd-0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187.scope - libcontainer container 0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187. 
May 17 00:14:57.320720 containerd[2643]: time="2025-05-17T00:14:57.320685705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c6666cd8d-5cpvh,Uid:afa60deb-188d-4048-ba25-d7cae1d87a15,Namespace:calico-system,Attempt:1,} returns sandbox id \"0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187\"" May 17 00:14:57.321761 containerd[2643]: time="2025-05-17T00:14:57.321742225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:14:57.367087 systemd-networkd[2553]: cali8fa1687beac: Link UP May 17 00:14:57.367966 systemd-networkd[2553]: cali8fa1687beac: Gained carrier May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.205 [INFO][8086] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.220 [INFO][8086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0 coredns-668d6bf9bc- kube-system 2e72750b-2053-4375-95c0-ca47f0bf61d4 950 0 2025-05-17 00:14:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 coredns-668d6bf9bc-qstm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8fa1687beac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.220 [INFO][8086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.239 [INFO][8166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" HandleID="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.239 [INFO][8166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" HandleID="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"coredns-668d6bf9bc-qstm5", "timestamp":"2025-05-17 00:14:57.239681584 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.239 [INFO][8166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.252 [INFO][8166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.252 [INFO][8166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.337 [INFO][8166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.340 [INFO][8166] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.343 [INFO][8166] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.344 [INFO][8166] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.346 [INFO][8166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.346 [INFO][8166] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.347 [INFO][8166] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.360 [INFO][8166] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.364 [INFO][8166] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.70/26] block=192.168.69.64/26 handle="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.364 [INFO][8166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.70/26] handle="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.364 [INFO][8166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.375616 containerd[2643]: 2025-05-17 00:14:57.364 [INFO][8166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.70/26] IPv6=[] ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" HandleID="k8s-pod-network.7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.376055 containerd[2643]: 2025-05-17 00:14:57.365 [INFO][8086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e72750b-2053-4375-95c0-ca47f0bf61d4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"coredns-668d6bf9bc-qstm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fa1687beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.376055 containerd[2643]: 2025-05-17 00:14:57.365 [INFO][8086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.70/32] ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.376055 containerd[2643]: 2025-05-17 00:14:57.365 [INFO][8086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fa1687beac ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.376055 containerd[2643]: 2025-05-17 00:14:57.368 [INFO][8086] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.376055 containerd[2643]: 2025-05-17 00:14:57.368 [INFO][8086] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e72750b-2053-4375-95c0-ca47f0bf61d4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d", Pod:"coredns-668d6bf9bc-qstm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fa1687beac", MAC:"4a:f6:a3:ac:3d:48", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.376055 containerd[2643]: 2025-05-17 00:14:57.374 [INFO][8086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d" Namespace="kube-system" Pod="coredns-668d6bf9bc-qstm5" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:14:57.388246 containerd[2643]: time="2025-05-17T00:14:57.388179386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:57.388246 containerd[2643]: time="2025-05-17T00:14:57.388232506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:57.388315 containerd[2643]: time="2025-05-17T00:14:57.388243666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:57.388337 containerd[2643]: time="2025-05-17T00:14:57.388320626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:57.409087 systemd[1]: Started cri-containerd-7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d.scope - libcontainer container 7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d. 
May 17 00:14:57.431898 containerd[2643]: time="2025-05-17T00:14:57.431869307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qstm5,Uid:2e72750b-2053-4375-95c0-ca47f0bf61d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d\"" May 17 00:14:57.433749 containerd[2643]: time="2025-05-17T00:14:57.433725147Z" level=info msg="CreateContainer within sandbox \"7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:14:57.438937 containerd[2643]: time="2025-05-17T00:14:57.438907907Z" level=info msg="CreateContainer within sandbox \"7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a09fe39830d1bbf4ec8c5ced326060eb83cfe48cabc11fe28983fbbb07c10bc6\"" May 17 00:14:57.439266 containerd[2643]: time="2025-05-17T00:14:57.439240267Z" level=info msg="StartContainer for \"a09fe39830d1bbf4ec8c5ced326060eb83cfe48cabc11fe28983fbbb07c10bc6\"" May 17 00:14:57.457946 systemd-networkd[2553]: cali03ed998c88b: Link UP May 17 00:14:57.458198 systemd-networkd[2553]: cali03ed998c88b: Gained carrier May 17 00:14:57.463078 systemd[1]: Started cri-containerd-a09fe39830d1bbf4ec8c5ced326060eb83cfe48cabc11fe28983fbbb07c10bc6.scope - libcontainer container a09fe39830d1bbf4ec8c5ced326060eb83cfe48cabc11fe28983fbbb07c10bc6. 
May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.213 [INFO][8114] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.223 [INFO][8114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0 goldmane-78d55f7ddc- calico-system a41cd5df-5d9c-4907-bb35-9d4adffa8017 951 0 2025-05-17 00:14:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 goldmane-78d55f7ddc-b5fcf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali03ed998c88b [] [] }} ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.223 [INFO][8114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.243 [INFO][8176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" HandleID="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.243 [INFO][8176] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" HandleID="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003dfd00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"goldmane-78d55f7ddc-b5fcf", "timestamp":"2025-05-17 00:14:57.243837864 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.244 [INFO][8176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.364 [INFO][8176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.364 [INFO][8176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.437 [INFO][8176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.440 [INFO][8176] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.443 [INFO][8176] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.444 [INFO][8176] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.446 [INFO][8176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.446 [INFO][8176] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.447 [INFO][8176] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6 May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.450 [INFO][8176] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.454 [INFO][8176] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.71/26] block=192.168.69.64/26 handle="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.454 [INFO][8176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.71/26] handle="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.454 [INFO][8176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.465811 containerd[2643]: 2025-05-17 00:14:57.454 [INFO][8176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.71/26] IPv6=[] ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" HandleID="k8s-pod-network.8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.466216 containerd[2643]: 2025-05-17 00:14:57.455 [INFO][8114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a41cd5df-5d9c-4907-bb35-9d4adffa8017", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"goldmane-78d55f7ddc-b5fcf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03ed998c88b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.466216 containerd[2643]: 2025-05-17 00:14:57.455 [INFO][8114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.71/32] ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.466216 containerd[2643]: 2025-05-17 00:14:57.456 [INFO][8114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03ed998c88b ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.466216 containerd[2643]: 2025-05-17 00:14:57.458 [INFO][8114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.466216 containerd[2643]: 2025-05-17 00:14:57.458 [INFO][8114] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a41cd5df-5d9c-4907-bb35-9d4adffa8017", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6", Pod:"goldmane-78d55f7ddc-b5fcf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03ed998c88b", MAC:"be:d9:5f:07:9c:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.466216 containerd[2643]: 2025-05-17 00:14:57.464 [INFO][8114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6" Namespace="calico-system" Pod="goldmane-78d55f7ddc-b5fcf" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:14:57.478460 containerd[2643]: time="2025-05-17T00:14:57.478097588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:57.478460 containerd[2643]: time="2025-05-17T00:14:57.478450108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:57.478512 containerd[2643]: time="2025-05-17T00:14:57.478462188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:57.478552 containerd[2643]: time="2025-05-17T00:14:57.478534668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:57.490694 containerd[2643]: time="2025-05-17T00:14:57.490661908Z" level=info msg="StartContainer for \"a09fe39830d1bbf4ec8c5ced326060eb83cfe48cabc11fe28983fbbb07c10bc6\" returns successfully" May 17 00:14:57.505097 systemd[1]: Started cri-containerd-8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6.scope - libcontainer container 8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6. 
May 17 00:14:57.529174 containerd[2643]: time="2025-05-17T00:14:57.529136109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-b5fcf,Uid:a41cd5df-5d9c-4907-bb35-9d4adffa8017,Namespace:calico-system,Attempt:1,} returns sandbox id \"8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6\"" May 17 00:14:57.982239 containerd[2643]: time="2025-05-17T00:14:57.982195237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:57.982360 containerd[2643]: time="2025-05-17T00:14:57.982216437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 17 00:14:57.983013 containerd[2643]: time="2025-05-17T00:14:57.982991957Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:57.984833 containerd[2643]: time="2025-05-17T00:14:57.984802957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:57.985539 containerd[2643]: time="2025-05-17T00:14:57.985513317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 663.739692ms" May 17 00:14:57.985587 containerd[2643]: time="2025-05-17T00:14:57.985544117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference 
\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 00:14:57.986275 containerd[2643]: time="2025-05-17T00:14:57.986253437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:14:57.991058 containerd[2643]: time="2025-05-17T00:14:57.991028517Z" level=info msg="CreateContainer within sandbox \"0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:14:57.995783 containerd[2643]: time="2025-05-17T00:14:57.995753357Z" level=info msg="CreateContainer within sandbox \"0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"071d38a781a6ca6bfedbbf111d7806f2e09f6bace543bcb3d75819761ff76ee2\"" May 17 00:14:57.996117 containerd[2643]: time="2025-05-17T00:14:57.996089797Z" level=info msg="StartContainer for \"071d38a781a6ca6bfedbbf111d7806f2e09f6bace543bcb3d75819761ff76ee2\"" May 17 00:14:58.009779 containerd[2643]: time="2025-05-17T00:14:58.009744038Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:58.010005 containerd[2643]: time="2025-05-17T00:14:58.009976598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:58.010065 containerd[2643]: time="2025-05-17T00:14:58.010037798Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:14:58.010154 kubelet[4097]: E0517 00:14:58.010117 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:14:58.010401 kubelet[4097]: E0517 00:14:58.010163 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:14:58.010401 kubelet[4097]: E0517 00:14:58.010285 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:58.011399 kubelet[4097]: E0517 00:14:58.011369 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:14:58.024006 systemd[1]: Started 
cri-containerd-071d38a781a6ca6bfedbbf111d7806f2e09f6bace543bcb3d75819761ff76ee2.scope - libcontainer container 071d38a781a6ca6bfedbbf111d7806f2e09f6bace543bcb3d75819761ff76ee2. May 17 00:14:58.048205 containerd[2643]: time="2025-05-17T00:14:58.048173358Z" level=info msg="StartContainer for \"071d38a781a6ca6bfedbbf111d7806f2e09f6bace543bcb3d75819761ff76ee2\" returns successfully" May 17 00:14:58.111758 containerd[2643]: time="2025-05-17T00:14:58.111725799Z" level=info msg="StopPodSandbox for \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\"" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.148 [INFO][8544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.148 [INFO][8544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" iface="eth0" netns="/var/run/netns/cni-79cc02e3-b2b5-93f3-6091-3cb149b34c1c" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.148 [INFO][8544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" iface="eth0" netns="/var/run/netns/cni-79cc02e3-b2b5-93f3-6091-3cb149b34c1c" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.148 [INFO][8544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" iface="eth0" netns="/var/run/netns/cni-79cc02e3-b2b5-93f3-6091-3cb149b34c1c" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.148 [INFO][8544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.148 [INFO][8544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.166 [INFO][8575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.166 [INFO][8575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.166 [INFO][8575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.173 [WARNING][8575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.174 [INFO][8575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.175 [INFO][8575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.177577 containerd[2643]: 2025-05-17 00:14:58.176 [INFO][8544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:14:58.178122 containerd[2643]: time="2025-05-17T00:14:58.177759001Z" level=info msg="TearDown network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\" successfully" May 17 00:14:58.178122 containerd[2643]: time="2025-05-17T00:14:58.177792401Z" level=info msg="StopPodSandbox for \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\" returns successfully" May 17 00:14:58.178372 containerd[2643]: time="2025-05-17T00:14:58.178343881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-8bsxh,Uid:36199e3b-e706-4256-a6af-e8f5ec5ff5b0,Namespace:calico-apiserver,Attempt:1,}" May 17 00:14:58.190492 systemd[1]: run-netns-cni\x2d79cc02e3\x2db2b5\x2d93f3\x2d6091\x2d3cb149b34c1c.mount: Deactivated successfully. 
May 17 00:14:58.193047 kubelet[4097]: E0517 00:14:58.193014 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:14:58.199065 kubelet[4097]: I0517 00:14:58.199023 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qstm5" podStartSLOduration=34.199010201 podStartE2EDuration="34.199010201s" podCreationTimestamp="2025-05-17 00:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:58.198884121 +0000 UTC m=+41.159823276" watchObservedRunningTime="2025-05-17 00:14:58.199010201 +0000 UTC m=+41.159949356" May 17 00:14:58.206159 kubelet[4097]: I0517 00:14:58.206115 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c6666cd8d-5cpvh" podStartSLOduration=18.541506349 podStartE2EDuration="19.206102521s" podCreationTimestamp="2025-05-17 00:14:39 +0000 UTC" firstStartedPulling="2025-05-17 00:14:57.321547065 +0000 UTC m=+40.282486220" lastFinishedPulling="2025-05-17 00:14:57.986143237 +0000 UTC m=+40.947082392" observedRunningTime="2025-05-17 00:14:58.205634961 +0000 UTC m=+41.166574156" watchObservedRunningTime="2025-05-17 00:14:58.206102521 +0000 UTC m=+41.167041676" May 17 00:14:58.259125 systemd-networkd[2553]: cali05f03606d67: Link UP 
May 17 00:14:58.259325 systemd-networkd[2553]: cali05f03606d67: Gained carrier May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.199 [INFO][8590] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.209 [INFO][8590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0 calico-apiserver-847d49c9d7- calico-apiserver 36199e3b-e706-4256-a6af-e8f5ec5ff5b0 976 0 2025-05-17 00:14:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847d49c9d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-02409cc2a5 calico-apiserver-847d49c9d7-8bsxh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05f03606d67 [] [] }} ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.209 [INFO][8590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.230 [INFO][8620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" HandleID="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" 
Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.230 [INFO][8620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" HandleID="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003df350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-02409cc2a5", "pod":"calico-apiserver-847d49c9d7-8bsxh", "timestamp":"2025-05-17 00:14:58.230825121 +0000 UTC"}, Hostname:"ci-4081.3.3-n-02409cc2a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.231 [INFO][8620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.231 [INFO][8620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.231 [INFO][8620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-02409cc2a5' May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.238 [INFO][8620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.241 [INFO][8620] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.244 [INFO][8620] ipam/ipam.go 511: Trying affinity for 192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.246 [INFO][8620] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.247 [INFO][8620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.64/26 host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.247 [INFO][8620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.64/26 handle="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.249 [INFO][8620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.251 [INFO][8620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.64/26 handle="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.256 [INFO][8620] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.69.72/26] block=192.168.69.64/26 handle="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.256 [INFO][8620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.72/26] handle="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" host="ci-4081.3.3-n-02409cc2a5" May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.256 [INFO][8620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.267543 containerd[2643]: 2025-05-17 00:14:58.256 [INFO][8620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.72/26] IPv6=[] ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" HandleID="k8s-pod-network.c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.268072 containerd[2643]: 2025-05-17 00:14:58.257 [INFO][8590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"36199e3b-e706-4256-a6af-e8f5ec5ff5b0", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"", Pod:"calico-apiserver-847d49c9d7-8bsxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05f03606d67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.268072 containerd[2643]: 2025-05-17 00:14:58.257 [INFO][8590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.72/32] ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.268072 containerd[2643]: 2025-05-17 00:14:58.257 [INFO][8590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05f03606d67 ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.268072 containerd[2643]: 2025-05-17 00:14:58.259 [INFO][8590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" 
WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.268072 containerd[2643]: 2025-05-17 00:14:58.260 [INFO][8590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"36199e3b-e706-4256-a6af-e8f5ec5ff5b0", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef", Pod:"calico-apiserver-847d49c9d7-8bsxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05f03606d67", MAC:"8e:06:d8:f9:26:19", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.268072 containerd[2643]: 2025-05-17 00:14:58.266 [INFO][8590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef" Namespace="calico-apiserver" Pod="calico-apiserver-847d49c9d7-8bsxh" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:14:58.279817 containerd[2643]: time="2025-05-17T00:14:58.279724122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:58.279817 containerd[2643]: time="2025-05-17T00:14:58.279805162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:58.279864 containerd[2643]: time="2025-05-17T00:14:58.279816562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:58.279948 containerd[2643]: time="2025-05-17T00:14:58.279929162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:58.308106 systemd[1]: Started cri-containerd-c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef.scope - libcontainer container c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef. 
May 17 00:14:58.331318 containerd[2643]: time="2025-05-17T00:14:58.331283083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d49c9d7-8bsxh,Uid:36199e3b-e706-4256-a6af-e8f5ec5ff5b0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef\"" May 17 00:14:58.333024 containerd[2643]: time="2025-05-17T00:14:58.333001083Z" level=info msg="CreateContainer within sandbox \"c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:14:58.337669 containerd[2643]: time="2025-05-17T00:14:58.337637923Z" level=info msg="CreateContainer within sandbox \"c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b8a6cbae6825ccb4becb9233df37ca010ec4889cd93f940f1f1f02d2c2bba7e5\"" May 17 00:14:58.338025 containerd[2643]: time="2025-05-17T00:14:58.337992163Z" level=info msg="StartContainer for \"b8a6cbae6825ccb4becb9233df37ca010ec4889cd93f940f1f1f02d2c2bba7e5\"" May 17 00:14:58.362008 systemd[1]: Started cri-containerd-b8a6cbae6825ccb4becb9233df37ca010ec4889cd93f940f1f1f02d2c2bba7e5.scope - libcontainer container b8a6cbae6825ccb4becb9233df37ca010ec4889cd93f940f1f1f02d2c2bba7e5. 
May 17 00:14:58.386330 containerd[2643]: time="2025-05-17T00:14:58.386304564Z" level=info msg="StartContainer for \"b8a6cbae6825ccb4becb9233df37ca010ec4889cd93f940f1f1f02d2c2bba7e5\" returns successfully" May 17 00:14:58.589073 systemd-networkd[2553]: cali8fa1687beac: Gained IPv6LL May 17 00:14:58.844968 systemd-networkd[2553]: cali56046756f14: Gained IPv6LL May 17 00:14:59.165016 systemd-networkd[2553]: cali03ed998c88b: Gained IPv6LL May 17 00:14:59.197072 kubelet[4097]: I0517 00:14:59.197037 4097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:14:59.197821 kubelet[4097]: E0517 00:14:59.197797 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:14:59.204670 kubelet[4097]: I0517 00:14:59.204634 4097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-847d49c9d7-8bsxh" podStartSLOduration=25.204622458 podStartE2EDuration="25.204622458s" podCreationTimestamp="2025-05-17 00:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:59.204276298 +0000 UTC m=+42.165215453" watchObservedRunningTime="2025-05-17 00:14:59.204622458 +0000 UTC m=+42.165561573" May 17 00:15:00.061004 systemd-networkd[2553]: cali05f03606d67: Gained IPv6LL May 17 00:15:00.198951 
kubelet[4097]: I0517 00:15:00.198910 4097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:02.168604 systemd[1]: Started sshd@7-147.28.129.25:22-194.0.234.107:51078.service - OpenSSH per-connection server daemon (194.0.234.107:51078). May 17 00:15:02.812886 sshd[9046]: Invalid user from 194.0.234.107 port 51078 May 17 00:15:03.112257 containerd[2643]: time="2025-05-17T00:15:03.112183555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:15:03.142439 containerd[2643]: time="2025-05-17T00:15:03.142387715Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:03.146942 containerd[2643]: time="2025-05-17T00:15:03.146909715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:03.147019 containerd[2643]: time="2025-05-17T00:15:03.146983875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:15:03.147144 kubelet[4097]: E0517 00:15:03.147075 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:03.147365 kubelet[4097]: E0517 00:15:03.147146 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:03.147365 kubelet[4097]: E0517 00:15:03.147234 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d78f9656fa7f429e98165d5566619fe2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:03.148915 containerd[2643]: time="2025-05-17T00:15:03.148890675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:15:03.176694 containerd[2643]: time="2025-05-17T00:15:03.176661996Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:03.176921 containerd[2643]: time="2025-05-17T00:15:03.176890436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:03.177002 containerd[2643]: time="2025-05-17T00:15:03.176914716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:15:03.177079 
kubelet[4097]: E0517 00:15:03.177049 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:03.177129 kubelet[4097]: E0517 00:15:03.177086 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:03.177203 kubelet[4097]: E0517 00:15:03.177169 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:03.179090 kubelet[4097]: E0517 00:15:03.179054 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:15:06.049171 kubelet[4097]: I0517 00:15:06.049125 4097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:06.799970 kernel: bpftool[9329]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:15:06.964145 systemd-networkd[2553]: vxlan.calico: Link UP May 17 00:15:06.964151 systemd-networkd[2553]: vxlan.calico: Gained carrier May 17 00:15:08.381018 systemd-networkd[2553]: vxlan.calico: Gained IPv6LL May 17 00:15:10.112111 containerd[2643]: time="2025-05-17T00:15:10.112063547Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:15:10.142800 containerd[2643]: time="2025-05-17T00:15:10.142755187Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:10.143185 containerd[2643]: time="2025-05-17T00:15:10.143109907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:10.143252 containerd[2643]: time="2025-05-17T00:15:10.143182587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:15:10.143361 kubelet[4097]: E0517 00:15:10.143253 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:10.143361 kubelet[4097]: E0517 00:15:10.143316 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:10.143662 kubelet[4097]: E0517 00:15:10.143422 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:10.144590 kubelet[4097]: E0517 00:15:10.144565 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:15:12.194331 sshd[9046]: Connection closed by invalid user 194.0.234.107 port 51078 [preauth] May 17 00:15:12.196302 systemd[1]: sshd@7-147.28.129.25:22-194.0.234.107:51078.service: Deactivated successfully. May 17 00:15:16.113172 kubelet[4097]: E0517 00:15:16.113123 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:15:16.625982 kubelet[4097]: I0517 00:15:16.625934 4097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:17.104480 containerd[2643]: time="2025-05-17T00:15:17.104442152Z" level=info msg="StopPodSandbox for 
\"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\"" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.134 [WARNING][9615] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67172858-15b2-4ceb-9630-af18b81413de", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6", Pod:"csi-node-driver-b2ztl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c25fb25eca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.134 [INFO][9615] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.135 [INFO][9615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" iface="eth0" netns="" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.135 [INFO][9615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.135 [INFO][9615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.152 [INFO][9637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.152 [INFO][9637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.153 [INFO][9637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.160 [WARNING][9637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.160 [INFO][9637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.161 [INFO][9637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.164320 containerd[2643]: 2025-05-17 00:15:17.162 [INFO][9615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.164646 containerd[2643]: time="2025-05-17T00:15:17.164370073Z" level=info msg="TearDown network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\" successfully" May 17 00:15:17.164646 containerd[2643]: time="2025-05-17T00:15:17.164405793Z" level=info msg="StopPodSandbox for \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\" returns successfully" May 17 00:15:17.164820 containerd[2643]: time="2025-05-17T00:15:17.164794193Z" level=info msg="RemovePodSandbox for \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\"" May 17 00:15:17.164846 containerd[2643]: time="2025-05-17T00:15:17.164829953Z" level=info msg="Forcibly stopping sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\"" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.194 [WARNING][9666] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67172858-15b2-4ceb-9630-af18b81413de", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"e57176abde38ce0aa27d171d0dfd699c5b7bf3346625cc2ec438367e3add10b6", Pod:"csi-node-driver-b2ztl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c25fb25eca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.195 [INFO][9666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.195 [INFO][9666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" iface="eth0" netns="" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.195 [INFO][9666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.195 [INFO][9666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.212 [INFO][9705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.212 [INFO][9705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.212 [INFO][9705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.219 [WARNING][9705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.219 [INFO][9705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" HandleID="k8s-pod-network.7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" Workload="ci--4081.3.3--n--02409cc2a5-k8s-csi--node--driver--b2ztl-eth0" May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.220 [INFO][9705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.222880 containerd[2643]: 2025-05-17 00:15:17.221 [INFO][9666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100" May 17 00:15:17.223246 containerd[2643]: time="2025-05-17T00:15:17.222934153Z" level=info msg="TearDown network for sandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\" successfully" May 17 00:15:17.224536 containerd[2643]: time="2025-05-17T00:15:17.224509473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:17.224583 containerd[2643]: time="2025-05-17T00:15:17.224571633Z" level=info msg="RemovePodSandbox \"7a706750366a28ca9557aca7153ddc77416ce083dfcafd7513af2ae42e6ce100\" returns successfully" May 17 00:15:17.225041 containerd[2643]: time="2025-05-17T00:15:17.225017833Z" level=info msg="StopPodSandbox for \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\"" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.253 [WARNING][9751] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a41cd5df-5d9c-4907-bb35-9d4adffa8017", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6", Pod:"goldmane-78d55f7ddc-b5fcf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali03ed998c88b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.254 [INFO][9751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.254 [INFO][9751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" iface="eth0" netns="" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.254 [INFO][9751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.254 [INFO][9751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.271 [INFO][9780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.272 [INFO][9780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.272 [INFO][9780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.279 [WARNING][9780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.279 [INFO][9780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.280 [INFO][9780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.282857 containerd[2643]: 2025-05-17 00:15:17.281 [INFO][9751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.283161 containerd[2643]: time="2025-05-17T00:15:17.282914633Z" level=info msg="TearDown network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\" successfully" May 17 00:15:17.283161 containerd[2643]: time="2025-05-17T00:15:17.282939793Z" level=info msg="StopPodSandbox for \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\" returns successfully" May 17 00:15:17.283241 containerd[2643]: time="2025-05-17T00:15:17.283213793Z" level=info msg="RemovePodSandbox for \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\"" May 17 00:15:17.283267 containerd[2643]: time="2025-05-17T00:15:17.283246633Z" level=info msg="Forcibly stopping sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\"" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.313 [WARNING][9813] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a41cd5df-5d9c-4907-bb35-9d4adffa8017", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"8e0b2a6014260df1899ed2ee57bbd603134c690826f74d4f7089604a7a8ea9b6", Pod:"goldmane-78d55f7ddc-b5fcf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03ed998c88b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.313 [INFO][9813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.313 [INFO][9813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" iface="eth0" netns="" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.313 [INFO][9813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.313 [INFO][9813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.330 [INFO][9835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.331 [INFO][9835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.331 [INFO][9835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.338 [WARNING][9835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.338 [INFO][9835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" HandleID="k8s-pod-network.e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" Workload="ci--4081.3.3--n--02409cc2a5-k8s-goldmane--78d55f7ddc--b5fcf-eth0" May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.339 [INFO][9835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.342248 containerd[2643]: 2025-05-17 00:15:17.340 [INFO][9813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b" May 17 00:15:17.342521 containerd[2643]: time="2025-05-17T00:15:17.342288154Z" level=info msg="TearDown network for sandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\" successfully" May 17 00:15:17.343897 containerd[2643]: time="2025-05-17T00:15:17.343864514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:17.343952 containerd[2643]: time="2025-05-17T00:15:17.343927874Z" level=info msg="RemovePodSandbox \"e600c3fe2f8710e41445781981640a5ed7ac675e5d31a2c1318ccda6acf9030b\" returns successfully" May 17 00:15:17.344303 containerd[2643]: time="2025-05-17T00:15:17.344279314Z" level=info msg="StopPodSandbox for \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\"" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.374 [WARNING][9858] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0", GenerateName:"calico-kube-controllers-7c6666cd8d-", Namespace:"calico-system", SelfLink:"", UID:"afa60deb-188d-4048-ba25-d7cae1d87a15", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6666cd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187", Pod:"calico-kube-controllers-7c6666cd8d-5cpvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56046756f14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.374 [INFO][9858] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.374 [INFO][9858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" iface="eth0" netns="" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.374 [INFO][9858] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.374 [INFO][9858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.392 [INFO][9880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.392 [INFO][9880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.392 [INFO][9880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.399 [WARNING][9880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.399 [INFO][9880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.401 [INFO][9880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.403830 containerd[2643]: 2025-05-17 00:15:17.402 [INFO][9858] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.403830 containerd[2643]: time="2025-05-17T00:15:17.403815674Z" level=info msg="TearDown network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\" successfully" May 17 00:15:17.404264 containerd[2643]: time="2025-05-17T00:15:17.403835114Z" level=info msg="StopPodSandbox for \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\" returns successfully" May 17 00:15:17.404325 containerd[2643]: time="2025-05-17T00:15:17.404301874Z" level=info msg="RemovePodSandbox for \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\"" May 17 00:15:17.404352 containerd[2643]: time="2025-05-17T00:15:17.404332234Z" level=info msg="Forcibly stopping sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\"" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.434 [WARNING][9910] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0", GenerateName:"calico-kube-controllers-7c6666cd8d-", Namespace:"calico-system", SelfLink:"", UID:"afa60deb-188d-4048-ba25-d7cae1d87a15", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c6666cd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"0789582964ff335bf70151ce6a9ada437d9270d82b386e7a08657bb80eb0c187", Pod:"calico-kube-controllers-7c6666cd8d-5cpvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56046756f14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.434 [INFO][9910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.434 [INFO][9910] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" iface="eth0" netns="" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.434 [INFO][9910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.434 [INFO][9910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.451 [INFO][9927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.451 [INFO][9927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.451 [INFO][9927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.458 [WARNING][9927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.458 [INFO][9927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" HandleID="k8s-pod-network.cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--kube--controllers--7c6666cd8d--5cpvh-eth0" May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.459 [INFO][9927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.462311 containerd[2643]: 2025-05-17 00:15:17.460 [INFO][9910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6" May 17 00:15:17.462689 containerd[2643]: time="2025-05-17T00:15:17.462337194Z" level=info msg="TearDown network for sandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\" successfully" May 17 00:15:17.463884 containerd[2643]: time="2025-05-17T00:15:17.463855674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:17.463937 containerd[2643]: time="2025-05-17T00:15:17.463922114Z" level=info msg="RemovePodSandbox \"cc331120ce2f5e94d7d0e98bcfcac2f56d090dafdc4abe6e6c145d84070d26b6\" returns successfully" May 17 00:15:17.464228 containerd[2643]: time="2025-05-17T00:15:17.464203554Z" level=info msg="StopPodSandbox for \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\"" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.497 [WARNING][9958] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"36199e3b-e706-4256-a6af-e8f5ec5ff5b0", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef", Pod:"calico-apiserver-847d49c9d7-8bsxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05f03606d67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.497 [INFO][9958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.497 [INFO][9958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" iface="eth0" netns="" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.497 [INFO][9958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.497 [INFO][9958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.514 [INFO][9978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.514 [INFO][9978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.514 [INFO][9978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.522 [WARNING][9978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.522 [INFO][9978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.523 [INFO][9978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.525684 containerd[2643]: 2025-05-17 00:15:17.524 [INFO][9958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.526073 containerd[2643]: time="2025-05-17T00:15:17.525709635Z" level=info msg="TearDown network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\" successfully" May 17 00:15:17.526073 containerd[2643]: time="2025-05-17T00:15:17.525729555Z" level=info msg="StopPodSandbox for \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\" returns successfully" May 17 00:15:17.526073 containerd[2643]: time="2025-05-17T00:15:17.526053755Z" level=info msg="RemovePodSandbox for \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\"" May 17 00:15:17.526133 containerd[2643]: time="2025-05-17T00:15:17.526084275Z" level=info msg="Forcibly stopping sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\"" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.556 [WARNING][10006] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"36199e3b-e706-4256-a6af-e8f5ec5ff5b0", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"c08f1aede875193349bb60e15f78ddc70b41ee240a5d559315ff1f9c490514ef", Pod:"calico-apiserver-847d49c9d7-8bsxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05f03606d67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.557 [INFO][10006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.557 [INFO][10006] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" iface="eth0" netns="" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.557 [INFO][10006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.557 [INFO][10006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.574 [INFO][10026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.575 [INFO][10026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.575 [INFO][10026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.583 [WARNING][10026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.583 [INFO][10026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" HandleID="k8s-pod-network.864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--8bsxh-eth0" May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.584 [INFO][10026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.587230 containerd[2643]: 2025-05-17 00:15:17.585 [INFO][10006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409" May 17 00:15:17.587571 containerd[2643]: time="2025-05-17T00:15:17.587274035Z" level=info msg="TearDown network for sandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\" successfully" May 17 00:15:17.588897 containerd[2643]: time="2025-05-17T00:15:17.588865635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:17.588943 containerd[2643]: time="2025-05-17T00:15:17.588929715Z" level=info msg="RemovePodSandbox \"864dcdc90a6545da7e5b0668a6cb7712c584f3ba5679f16bbad87010ccb07409\" returns successfully" May 17 00:15:17.589281 containerd[2643]: time="2025-05-17T00:15:17.589256675Z" level=info msg="StopPodSandbox for \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\"" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.619 [WARNING][10058] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"10344652-ca68-479a-86f6-162b29976180", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d", Pod:"calico-apiserver-847d49c9d7-2j2bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e12b0e593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.619 [INFO][10058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.619 [INFO][10058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" iface="eth0" netns="" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.619 [INFO][10058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.619 [INFO][10058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.636 [INFO][10077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.636 [INFO][10077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.636 [INFO][10077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.644 [WARNING][10077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.644 [INFO][10077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.645 [INFO][10077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.647826 containerd[2643]: 2025-05-17 00:15:17.646 [INFO][10058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.648263 containerd[2643]: time="2025-05-17T00:15:17.647855155Z" level=info msg="TearDown network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\" successfully" May 17 00:15:17.648263 containerd[2643]: time="2025-05-17T00:15:17.647878275Z" level=info msg="StopPodSandbox for \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\" returns successfully" May 17 00:15:17.648263 containerd[2643]: time="2025-05-17T00:15:17.648220675Z" level=info msg="RemovePodSandbox for \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\"" May 17 00:15:17.648263 containerd[2643]: time="2025-05-17T00:15:17.648251555Z" level=info msg="Forcibly stopping sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\"" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.679 [WARNING][10107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0", GenerateName:"calico-apiserver-847d49c9d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"10344652-ca68-479a-86f6-162b29976180", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d49c9d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"e75c1047c13fa668822b98d54a33310bdf9f287aed7610d60642692b5774779d", Pod:"calico-apiserver-847d49c9d7-2j2bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3e12b0e593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.679 [INFO][10107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.679 [INFO][10107] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" iface="eth0" netns="" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.679 [INFO][10107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.679 [INFO][10107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.696 [INFO][10127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.696 [INFO][10127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.696 [INFO][10127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.704 [WARNING][10127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.704 [INFO][10127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" HandleID="k8s-pod-network.b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" Workload="ci--4081.3.3--n--02409cc2a5-k8s-calico--apiserver--847d49c9d7--2j2bt-eth0" May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.705 [INFO][10127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.707912 containerd[2643]: 2025-05-17 00:15:17.706 [INFO][10107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76" May 17 00:15:17.708295 containerd[2643]: time="2025-05-17T00:15:17.707916915Z" level=info msg="TearDown network for sandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\" successfully" May 17 00:15:17.709469 containerd[2643]: time="2025-05-17T00:15:17.709414035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:17.709564 containerd[2643]: time="2025-05-17T00:15:17.709476475Z" level=info msg="RemovePodSandbox \"b2f68d867df1e4a449ee8c1e67511b4e5e526921316f5baca27fbb4a4b030d76\" returns successfully" May 17 00:15:17.710109 containerd[2643]: time="2025-05-17T00:15:17.709849075Z" level=info msg="StopPodSandbox for \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\"" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.739 [WARNING][10157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e72750b-2053-4375-95c0-ca47f0bf61d4", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d", Pod:"coredns-668d6bf9bc-qstm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fa1687beac", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.740 [INFO][10157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.740 [INFO][10157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" iface="eth0" netns="" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.740 [INFO][10157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.740 [INFO][10157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.757 [INFO][10177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.757 [INFO][10177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.757 [INFO][10177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.764 [WARNING][10177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.765 [INFO][10177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.766 [INFO][10177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.768382 containerd[2643]: 2025-05-17 00:15:17.767 [INFO][10157] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.768845 containerd[2643]: time="2025-05-17T00:15:17.768400356Z" level=info msg="TearDown network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\" successfully" May 17 00:15:17.768845 containerd[2643]: time="2025-05-17T00:15:17.768426116Z" level=info msg="StopPodSandbox for \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\" returns successfully" May 17 00:15:17.768845 containerd[2643]: time="2025-05-17T00:15:17.768728476Z" level=info msg="RemovePodSandbox for \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\"" May 17 00:15:17.768845 containerd[2643]: time="2025-05-17T00:15:17.768755156Z" level=info msg="Forcibly stopping sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\"" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.799 [WARNING][10208] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e72750b-2053-4375-95c0-ca47f0bf61d4", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"7470b27067f238c2e59f8e12f98814768d1932a175fc0f05479981b04d74604d", Pod:"coredns-668d6bf9bc-qstm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fa1687beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:17.828008 containerd[2643]: 2025-05-17 
00:15:17.799 [INFO][10208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.799 [INFO][10208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" iface="eth0" netns="" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.799 [INFO][10208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.799 [INFO][10208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.816 [INFO][10231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.816 [INFO][10231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.816 [INFO][10231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.824 [WARNING][10231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.824 [INFO][10231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" HandleID="k8s-pod-network.a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--qstm5-eth0" May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.825 [INFO][10231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.828008 containerd[2643]: 2025-05-17 00:15:17.826 [INFO][10208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73" May 17 00:15:17.828320 containerd[2643]: time="2025-05-17T00:15:17.828034396Z" level=info msg="TearDown network for sandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\" successfully" May 17 00:15:17.829538 containerd[2643]: time="2025-05-17T00:15:17.829509716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:17.829577 containerd[2643]: time="2025-05-17T00:15:17.829562876Z" level=info msg="RemovePodSandbox \"a6960ca7295fe07ed584623c2cf03ccbf7d5c8483510febf8d28d6561e03dd73\" returns successfully" May 17 00:15:17.829967 containerd[2643]: time="2025-05-17T00:15:17.829938756Z" level=info msg="StopPodSandbox for \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\"" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.859 [WARNING][10260] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.859 [INFO][10260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.859 [INFO][10260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" iface="eth0" netns="" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.859 [INFO][10260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.859 [INFO][10260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.875 [INFO][10281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.875 [INFO][10281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.875 [INFO][10281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.883 [WARNING][10281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.883 [INFO][10281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.884 [INFO][10281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.886622 containerd[2643]: 2025-05-17 00:15:17.885 [INFO][10260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.886851 containerd[2643]: time="2025-05-17T00:15:17.886669876Z" level=info msg="TearDown network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\" successfully" May 17 00:15:17.886851 containerd[2643]: time="2025-05-17T00:15:17.886699716Z" level=info msg="StopPodSandbox for \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\" returns successfully" May 17 00:15:17.887113 containerd[2643]: time="2025-05-17T00:15:17.887087316Z" level=info msg="RemovePodSandbox for \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\"" May 17 00:15:17.887146 containerd[2643]: time="2025-05-17T00:15:17.887118996Z" level=info msg="Forcibly stopping sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\"" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.915 [WARNING][10310] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" WorkloadEndpoint="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.915 [INFO][10310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.915 [INFO][10310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" iface="eth0" netns="" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.915 [INFO][10310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.915 [INFO][10310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.933 [INFO][10331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.933 [INFO][10331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.933 [INFO][10331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.940 [WARNING][10331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.940 [INFO][10331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" HandleID="k8s-pod-network.64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" Workload="ci--4081.3.3--n--02409cc2a5-k8s-whisker--76b45cfbd--l29zd-eth0" May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.941 [INFO][10331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:17.943873 containerd[2643]: 2025-05-17 00:15:17.942 [INFO][10310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a" May 17 00:15:17.944108 containerd[2643]: time="2025-05-17T00:15:17.943935157Z" level=info msg="TearDown network for sandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\" successfully" May 17 00:15:17.945439 containerd[2643]: time="2025-05-17T00:15:17.945410877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:17.945479 containerd[2643]: time="2025-05-17T00:15:17.945465437Z" level=info msg="RemovePodSandbox \"64daefdae5b7fb010f68388582bd6ea9e56158d7e4571d7cae18496746041d2a\" returns successfully" May 17 00:15:17.945795 containerd[2643]: time="2025-05-17T00:15:17.945770877Z" level=info msg="StopPodSandbox for \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\"" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.976 [WARNING][10364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c69e66b4-623c-48d0-af99-ebae9f6f8a0f", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1", Pod:"coredns-668d6bf9bc-t2mhc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26e9a95076b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.977 [INFO][10364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.977 [INFO][10364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" iface="eth0" netns="" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.977 [INFO][10364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.977 [INFO][10364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.994 [INFO][10384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.995 [INFO][10384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:17.995 [INFO][10384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:18.002 [WARNING][10384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:18.002 [INFO][10384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:18.003 [INFO][10384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:18.005758 containerd[2643]: 2025-05-17 00:15:18.004 [INFO][10364] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.006134 containerd[2643]: time="2025-05-17T00:15:18.005757637Z" level=info msg="TearDown network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\" successfully" May 17 00:15:18.006134 containerd[2643]: time="2025-05-17T00:15:18.005785517Z" level=info msg="StopPodSandbox for \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\" returns successfully" May 17 00:15:18.006246 containerd[2643]: time="2025-05-17T00:15:18.006219957Z" level=info msg="RemovePodSandbox for \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\"" May 17 00:15:18.006273 containerd[2643]: time="2025-05-17T00:15:18.006254397Z" level=info msg="Forcibly stopping sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\"" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.036 [WARNING][10413] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c69e66b4-623c-48d0-af99-ebae9f6f8a0f", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-02409cc2a5", ContainerID:"f7ce0798c4b2207e64c8747ca5abce7cdb2bc25e350bbc365f766697fcb28ea1", Pod:"coredns-668d6bf9bc-t2mhc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26e9a95076b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:15:18.066804 containerd[2643]: 2025-05-17 
00:15:18.036 [INFO][10413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.036 [INFO][10413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" iface="eth0" netns="" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.036 [INFO][10413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.036 [INFO][10413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.053 [INFO][10437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.053 [INFO][10437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.053 [INFO][10437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.063 [WARNING][10437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.063 [INFO][10437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" HandleID="k8s-pod-network.1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" Workload="ci--4081.3.3--n--02409cc2a5-k8s-coredns--668d6bf9bc--t2mhc-eth0" May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.064 [INFO][10437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:15:18.066804 containerd[2643]: 2025-05-17 00:15:18.065 [INFO][10413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1" May 17 00:15:18.067195 containerd[2643]: time="2025-05-17T00:15:18.066834157Z" level=info msg="TearDown network for sandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\" successfully" May 17 00:15:18.068465 containerd[2643]: time="2025-05-17T00:15:18.068435597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:15:18.068507 containerd[2643]: time="2025-05-17T00:15:18.068491037Z" level=info msg="RemovePodSandbox \"1ef3c369316b0638caed6e8709a66fd51654c9c73969209fd3be1ff36a5d73e1\" returns successfully" May 17 00:15:24.111856 kubelet[4097]: E0517 00:15:24.111808 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:15:26.369559 kubelet[4097]: I0517 00:15:26.369517 4097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:15:28.112906 containerd[2643]: time="2025-05-17T00:15:28.112868113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:15:28.153764 containerd[2643]: time="2025-05-17T00:15:28.153665633Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:28.161466 containerd[2643]: time="2025-05-17T00:15:28.161430553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:28.161548 containerd[2643]: time="2025-05-17T00:15:28.161514593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:15:28.161650 kubelet[4097]: E0517 00:15:28.161616 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:28.161908 kubelet[4097]: E0517 00:15:28.161665 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:28.161908 kubelet[4097]: E0517 00:15:28.161791 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d78f9656fa7f429e98165d5566619fe2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:28.164224 containerd[2643]: 
time="2025-05-17T00:15:28.164207313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:15:28.188178 containerd[2643]: time="2025-05-17T00:15:28.188153193Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:28.188371 containerd[2643]: time="2025-05-17T00:15:28.188342593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:28.188423 containerd[2643]: time="2025-05-17T00:15:28.188410393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:15:28.188542 kubelet[4097]: E0517 00:15:28.188500 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:28.188594 kubelet[4097]: E0517 00:15:28.188552 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:28.188728 kubelet[4097]: E0517 00:15:28.188669 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:28.190716 kubelet[4097]: E0517 00:15:28.190666 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:15:39.112769 containerd[2643]: time="2025-05-17T00:15:39.112683032Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:15:39.139744 containerd[2643]: time="2025-05-17T00:15:39.139695358Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:39.139978 containerd[2643]: time="2025-05-17T00:15:39.139955001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:39.140062 containerd[2643]: time="2025-05-17T00:15:39.140033402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:15:39.140171 kubelet[4097]: E0517 00:15:39.140125 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:39.140416 kubelet[4097]: E0517 00:15:39.140180 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:39.140416 kubelet[4097]: E0517 00:15:39.140301 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:39.141450 kubelet[4097]: E0517 00:15:39.141430 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:15:44.112798 kubelet[4097]: E0517 00:15:44.112730 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:15:45.193315 systemd[1]: Started sshd@8-147.28.129.25:22-218.92.0.158:26472.service - OpenSSH per-connection server daemon (218.92.0.158:26472). 
May 17 00:15:46.774064 sshd[10541]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 00:15:48.721139 sshd[10539]: PAM: Permission denied for root from 218.92.0.158
May 17 00:15:49.146371 sshd[10584]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 00:15:50.112493 kubelet[4097]: E0517 00:15:50.112447 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017"
May 17 00:15:50.701846 sshd[10539]: PAM: Permission denied for root from 218.92.0.158
May 17 00:15:51.127267 sshd[10585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 00:15:53.290009 sshd[10539]: PAM: Permission denied for root from 218.92.0.158
May 17 00:15:53.502773 sshd[10539]: Received disconnect from 218.92.0.158 port 26472:11: [preauth]
May 17 00:15:53.502773 sshd[10539]: Disconnected from authenticating user root 218.92.0.158 port 26472 [preauth]
May 17 00:15:53.504556 systemd[1]: sshd@8-147.28.129.25:22-218.92.0.158:26472.service: Deactivated successfully.
May 17 00:15:55.120398 kubelet[4097]: E0517 00:15:55.120326 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:16:01.114875 kubelet[4097]: E0517 00:16:01.114834 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" 
podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:16:09.113119 containerd[2643]: time="2025-05-17T00:16:09.113019091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:16:09.137675 containerd[2643]: time="2025-05-17T00:16:09.137622453Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:09.138066 containerd[2643]: time="2025-05-17T00:16:09.137986055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:09.138066 containerd[2643]: time="2025-05-17T00:16:09.138048375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:16:09.138158 kubelet[4097]: E0517 00:16:09.138119 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:16:09.138427 kubelet[4097]: E0517 00:16:09.138160 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:16:09.138427 kubelet[4097]: E0517 00:16:09.138248 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d78f9656fa7f429e98165d5566619fe2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:09.139923 containerd[2643]: time="2025-05-17T00:16:09.139898265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:16:09.163061 containerd[2643]: time="2025-05-17T00:16:09.162975459Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:09.163252 containerd[2643]: time="2025-05-17T00:16:09.163228780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:09.163316 containerd[2643]: time="2025-05-17T00:16:09.163295501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:16:09.163390 kubelet[4097]: E0517 00:16:09.163361 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:16:09.163455 kubelet[4097]: E0517 00:16:09.163398 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:16:09.163518 kubelet[4097]: E0517 00:16:09.163480 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:09.164663 kubelet[4097]: E0517 00:16:09.164625 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:16:15.112407 kubelet[4097]: E0517 00:16:15.112357 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:16:24.112412 kubelet[4097]: E0517 00:16:24.112361 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:16:29.114530 containerd[2643]: time="2025-05-17T00:16:29.114488551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:16:29.151177 containerd[2643]: time="2025-05-17T00:16:29.151132512Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:29.151444 containerd[2643]: time="2025-05-17T00:16:29.151413553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:29.151502 containerd[2643]: time="2025-05-17T00:16:29.151475273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:16:29.151591 kubelet[4097]: E0517 00:16:29.151558 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:16:29.151813 kubelet[4097]: E0517 00:16:29.151601 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:16:29.151813 kubelet[4097]: E0517 00:16:29.151753 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brv4,ReadOnly:t
rue,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:29.153741 kubelet[4097]: E0517 00:16:29.153712 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:16:36.112657 kubelet[4097]: E0517 00:16:36.112605 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:16:42.112681 kubelet[4097]: E0517 00:16:42.112578 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:16:49.113049 kubelet[4097]: E0517 00:16:49.112996 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:16:57.112235 kubelet[4097]: E0517 00:16:57.112185 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:17:01.114598 kubelet[4097]: E0517 00:17:01.114534 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:17:11.112542 kubelet[4097]: E0517 00:17:11.112490 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:17:15.113111 kubelet[4097]: E0517 00:17:15.113058 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:17:26.112176 kubelet[4097]: E0517 00:17:26.112091 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": 
ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:17:26.112598 kubelet[4097]: E0517 00:17:26.112493 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:17:39.112054 kubelet[4097]: E0517 00:17:39.112003 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:17:39.112462 containerd[2643]: time="2025-05-17T00:17:39.112096649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:17:39.236906 containerd[2643]: time="2025-05-17T00:17:39.236842293Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:39.237151 containerd[2643]: time="2025-05-17T00:17:39.237120254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:39.237220 containerd[2643]: time="2025-05-17T00:17:39.237194294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:17:39.237358 kubelet[4097]: E0517 00:17:39.237306 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:39.237476 kubelet[4097]: E0517 00:17:39.237372 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:39.237557 kubelet[4097]: E0517 00:17:39.237502 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d78f9656fa7f429e98165d5566619fe2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGrou
p:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:39.239197 containerd[2643]: time="2025-05-17T00:17:39.239178977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:17:39.278606 containerd[2643]: time="2025-05-17T00:17:39.278565481Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:39.293736 containerd[2643]: time="2025-05-17T00:17:39.293694066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:39.293805 containerd[2643]: time="2025-05-17T00:17:39.293770426Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:17:39.293923 kubelet[4097]: E0517 00:17:39.293883 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:39.293968 kubelet[4097]: E0517 00:17:39.293927 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:39.294045 kubelet[4097]: E0517 00:17:39.294010 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:39.296039 kubelet[4097]: E0517 00:17:39.296005 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:17:50.112830 containerd[2643]: time="2025-05-17T00:17:50.112785961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:17:50.145491 containerd[2643]: time="2025-05-17T00:17:50.145412165Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:50.145795 containerd[2643]: time="2025-05-17T00:17:50.145755611Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:50.145869 containerd[2643]: time="2025-05-17T00:17:50.145836332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:17:50.145982 kubelet[4097]: E0517 00:17:50.145933 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:50.146301 kubelet[4097]: E0517 00:17:50.145988 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:50.146301 kubelet[4097]: E0517 00:17:50.146124 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:50.147298 kubelet[4097]: E0517 00:17:50.147276 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:17:52.112336 kubelet[4097]: E0517 
00:17:52.112281 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:18:03.114206 kubelet[4097]: E0517 00:18:03.114144 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:18:06.112935 
kubelet[4097]: E0517 00:18:06.112869 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:18:07.868380 systemd[1]: Started sshd@9-147.28.129.25:22-218.92.0.158:15509.service - OpenSSH per-connection server daemon (218.92.0.158:15509). 
May 17 00:18:09.398191 sshd[10966]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:18:11.310362 sshd[10964]: PAM: Permission denied for root from 218.92.0.158 May 17 00:18:11.720802 sshd[10967]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:18:14.112248 kubelet[4097]: E0517 00:18:14.112205 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:18:14.575754 sshd[10964]: PAM: Permission denied for root from 218.92.0.158 May 17 00:18:14.986426 sshd[10968]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:18:16.918282 sshd[10964]: PAM: Permission denied for root from 218.92.0.158 May 17 00:18:17.123670 sshd[10964]: Received disconnect from 218.92.0.158 port 15509:11: [preauth] May 17 00:18:17.123670 sshd[10964]: Disconnected from authenticating user root 218.92.0.158 port 15509 [preauth] May 17 00:18:17.125361 systemd[1]: sshd@9-147.28.129.25:22-218.92.0.158:15509.service: Deactivated successfully. 
May 17 00:18:19.113234 kubelet[4097]: E0517 00:18:19.113185 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:18:26.112833 kubelet[4097]: E0517 00:18:26.112759 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" 
podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:18:30.112631 kubelet[4097]: E0517 00:18:30.112571 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:18:37.113047 kubelet[4097]: E0517 00:18:37.112995 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" 
pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:18:44.112942 kubelet[4097]: E0517 00:18:44.112875 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:18:52.112090 kubelet[4097]: E0517 00:18:52.112044 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:18:56.112875 kubelet[4097]: E0517 00:18:56.112829 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:19:06.111871 kubelet[4097]: E0517 00:19:06.111829 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:19:10.112753 kubelet[4097]: E0517 00:19:10.112699 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:19:14.386466 update_engine[2638]: I20250517 00:19:14.386409 2638 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.386862 2638 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387087 2638 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:19:14.387619 
update_engine[2638]: I20250517 00:19:14.387399 2638 omaha_request_params.cc:62] Current group set to lts May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387481 2638 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387490 2638 update_attempter.cc:643] Scheduling an action processor start. May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387506 2638 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387532 2638 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387579 2638 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387588 2638 omaha_request_action.cc:272] Request: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: May 17 00:19:14.387619 update_engine[2638]: I20250517 00:19:14.387593 2638 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:14.388020 locksmithd[2664]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 00:19:14.388540 update_engine[2638]: I20250517 00:19:14.388516 2638 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:14.388872 update_engine[2638]: I20250517 00:19:14.388846 2638 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 00:19:14.389426 update_engine[2638]: E20250517 00:19:14.389402 2638 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:14.389543 update_engine[2638]: I20250517 00:19:14.389525 2638 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 00:19:21.112568 kubelet[4097]: E0517 00:19:21.112509 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:19:21.113026 kubelet[4097]: E0517 00:19:21.112791 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:19:24.296852 update_engine[2638]: I20250517 00:19:24.296514 2638 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:24.296852 update_engine[2638]: I20250517 00:19:24.296725 2638 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:24.297206 update_engine[2638]: I20250517 00:19:24.296936 2638 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:19:24.297583 update_engine[2638]: E20250517 00:19:24.297557 2638 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:24.297719 update_engine[2638]: I20250517 00:19:24.297699 2638 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:19:32.112457 kubelet[4097]: E0517 00:19:32.112392 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:19:34.296009 update_engine[2638]: I20250517 00:19:34.295924 2638 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:34.296321 update_engine[2638]: I20250517 00:19:34.296162 
2638 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:34.296368 update_engine[2638]: I20250517 00:19:34.296341 2638 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:19:34.296995 update_engine[2638]: E20250517 00:19:34.296928 2638 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:34.296995 update_engine[2638]: I20250517 00:19:34.296975 2638 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:19:36.112509 kubelet[4097]: E0517 00:19:36.112448 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:19:44.296167 update_engine[2638]: I20250517 00:19:44.295938 2638 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:44.296502 update_engine[2638]: I20250517 00:19:44.296180 2638 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:44.296502 update_engine[2638]: I20250517 00:19:44.296361 2638 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:19:44.296875 update_engine[2638]: E20250517 00:19:44.296851 2638 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:44.296924 update_engine[2638]: I20250517 00:19:44.296912 2638 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:19:44.296951 update_engine[2638]: I20250517 00:19:44.296921 2638 omaha_request_action.cc:617] Omaha request response: May 17 00:19:44.297003 update_engine[2638]: E20250517 00:19:44.296990 2638 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 00:19:44.297027 update_engine[2638]: I20250517 00:19:44.297008 2638 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 00:19:44.297027 update_engine[2638]: I20250517 00:19:44.297014 2638 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:19:44.297027 update_engine[2638]: I20250517 00:19:44.297019 2638 update_attempter.cc:306] Processing Done. May 17 00:19:44.297085 update_engine[2638]: E20250517 00:19:44.297032 2638 update_attempter.cc:619] Update failed. May 17 00:19:44.297085 update_engine[2638]: I20250517 00:19:44.297039 2638 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 00:19:44.297085 update_engine[2638]: I20250517 00:19:44.297042 2638 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 00:19:44.297085 update_engine[2638]: I20250517 00:19:44.297047 2638 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 17 00:19:44.297169 update_engine[2638]: I20250517 00:19:44.297104 2638 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:19:44.297169 update_engine[2638]: I20250517 00:19:44.297124 2638 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:19:44.297169 update_engine[2638]: I20250517 00:19:44.297129 2638 omaha_request_action.cc:272] Request: May 17 00:19:44.297169 update_engine[2638]: May 17 00:19:44.297169 update_engine[2638]: May 17 00:19:44.297169 update_engine[2638]: May 17 00:19:44.297169 update_engine[2638]: May 17 00:19:44.297169 update_engine[2638]: May 17 00:19:44.297169 update_engine[2638]: May 17 00:19:44.297169 update_engine[2638]: I20250517 00:19:44.297135 2638 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:44.297340 update_engine[2638]: I20250517 00:19:44.297242 2638 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:44.297363 locksmithd[2664]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 00:19:44.297531 update_engine[2638]: I20250517 00:19:44.297389 2638 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 00:19:44.298034 update_engine[2638]: E20250517 00:19:44.298017 2638 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:44.298064 update_engine[2638]: I20250517 00:19:44.298051 2638 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:19:44.298064 update_engine[2638]: I20250517 00:19:44.298058 2638 omaha_request_action.cc:617] Omaha request response: May 17 00:19:44.298104 update_engine[2638]: I20250517 00:19:44.298063 2638 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:19:44.298104 update_engine[2638]: I20250517 00:19:44.298069 2638 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:19:44.298104 update_engine[2638]: I20250517 00:19:44.298074 2638 update_attempter.cc:306] Processing Done. May 17 00:19:44.298104 update_engine[2638]: I20250517 00:19:44.298079 2638 update_attempter.cc:310] Error event sent. 
May 17 00:19:44.298104 update_engine[2638]: I20250517 00:19:44.298085 2638 update_check_scheduler.cc:74] Next update check in 45m46s May 17 00:19:44.298241 locksmithd[2664]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 00:19:45.113039 kubelet[4097]: E0517 00:19:45.112991 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:19:47.113405 kubelet[4097]: E0517 00:19:47.113358 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:19:57.112926 kubelet[4097]: E0517 00:19:57.112831 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:19:59.113969 kubelet[4097]: E0517 00:19:59.113772 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:20:11.112294 kubelet[4097]: E0517 00:20:11.112218 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:20:12.112286 kubelet[4097]: E0517 00:20:12.112228 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:20:22.112494 containerd[2643]: time="2025-05-17T00:20:22.112402293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:20:22.186814 containerd[2643]: time="2025-05-17T00:20:22.186728465Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:20:22.187131 containerd[2643]: time="2025-05-17T00:20:22.187090907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:20:22.187210 containerd[2643]: time="2025-05-17T00:20:22.187172107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:20:22.187360 kubelet[4097]: E0517 00:20:22.187308 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:20:22.187737 kubelet[4097]: E0517 00:20:22.187365 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:20:22.187737 kubelet[4097]: E0517 00:20:22.187488 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d78f9656fa7f429e98165d5566619fe2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:20:22.189160 containerd[2643]: time="2025-05-17T00:20:22.189140396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:20:22.225881 containerd[2643]: time="2025-05-17T00:20:22.225841720Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:20:22.226102 containerd[2643]: time="2025-05-17T00:20:22.226080561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:20:22.226159 containerd[2643]: time="2025-05-17T00:20:22.226146562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:20:22.226268 
kubelet[4097]: E0517 00:20:22.226230 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:20:22.226325 kubelet[4097]: E0517 00:20:22.226279 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:20:22.226456 kubelet[4097]: E0517 00:20:22.226422 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6758cf69db-8wts4_calico-system(f9ad7e26-3a56-408f-a437-28e846a147e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:20:22.227575 kubelet[4097]: E0517 00:20:22.227549 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:20:24.112441 kubelet[4097]: E0517 00:20:24.112411 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:20:26.930443 systemd[1]: Started sshd@10-147.28.129.25:22-218.92.0.158:36771.service - OpenSSH per-connection server daemon (218.92.0.158:36771). May 17 00:20:28.457666 sshd[11357]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:20:30.454384 sshd[11347]: PAM: Permission denied for root from 218.92.0.158 May 17 00:20:30.865167 sshd[11379]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:20:32.803161 sshd[11347]: PAM: Permission denied for root from 218.92.0.158 May 17 00:20:33.213201 sshd[11380]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:20:34.894061 sshd[11347]: PAM: Permission denied for root from 218.92.0.158 May 17 00:20:35.099005 sshd[11347]: Received disconnect from 218.92.0.158 port 36771:11: [preauth] May 17 00:20:35.099005 sshd[11347]: Disconnected from authenticating user root 218.92.0.158 port 36771 [preauth] May 17 00:20:35.101064 systemd[1]: sshd@10-147.28.129.25:22-218.92.0.158:36771.service: Deactivated successfully. 
May 17 00:20:37.112582 containerd[2643]: time="2025-05-17T00:20:37.112547111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:20:37.152482 containerd[2643]: time="2025-05-17T00:20:37.152330199Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:20:37.152744 containerd[2643]: time="2025-05-17T00:20:37.152707480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:20:37.152803 containerd[2643]: time="2025-05-17T00:20:37.152780241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:20:37.153014 kubelet[4097]: E0517 00:20:37.152950 4097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:20:37.153379 kubelet[4097]: E0517 00:20:37.153033 4097 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:20:37.153379 kubelet[4097]: E0517 00:20:37.153190 4097 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-b5fcf_calico-system(a41cd5df-5d9c-4907-bb35-9d4adffa8017): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:20:37.154383 kubelet[4097]: E0517 00:20:37.154354 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:20:38.112584 kubelet[4097]: E0517 00:20:38.112543 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:20:49.112411 kubelet[4097]: E0517 00:20:49.112350 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:20:52.112455 kubelet[4097]: E0517 00:20:52.112394 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:21:04.112108 kubelet[4097]: E0517 00:21:04.112056 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:21:04.112527 kubelet[4097]: E0517 00:21:04.112335 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:21:18.112399 kubelet[4097]: E0517 00:21:18.112316 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:21:18.112785 kubelet[4097]: E0517 00:21:18.112723 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:21:29.112464 kubelet[4097]: E0517 00:21:29.112409 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed 
to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:21:31.112715 kubelet[4097]: E0517 00:21:31.112670 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:21:43.112326 kubelet[4097]: E0517 00:21:43.112274 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:21:45.112263 kubelet[4097]: E0517 00:21:45.112202 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:21:55.112452 kubelet[4097]: E0517 00:21:55.112397 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:21:59.112948 kubelet[4097]: E0517 00:21:59.112873 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:22:09.112784 kubelet[4097]: E0517 00:22:09.112741 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:22:11.112556 kubelet[4097]: E0517 00:22:11.112496 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:22:21.112551 kubelet[4097]: E0517 00:22:21.112498 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:22:23.112549 kubelet[4097]: E0517 00:22:23.112499 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:22:33.114552 kubelet[4097]: E0517 00:22:33.114493 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:22:36.112948 kubelet[4097]: E0517 00:22:36.112883 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:22:45.112425 kubelet[4097]: E0517 00:22:45.112224 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:22:47.112344 kubelet[4097]: E0517 00:22:47.112300 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:22:49.756383 systemd[1]: Started sshd@11-147.28.129.25:22-218.92.0.158:24052.service - OpenSSH per-connection server daemon (218.92.0.158:24052). 
May 17 00:22:51.267422 sshd[11722]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:22:53.560486 sshd[11720]: PAM: Permission denied for root from 218.92.0.158 May 17 00:22:53.966052 sshd[11723]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:22:56.534965 sshd[11720]: PAM: Permission denied for root from 218.92.0.158 May 17 00:22:56.940434 sshd[11726]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:22:58.112973 kubelet[4097]: E0517 00:22:58.112931 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:22:59.114979 kubelet[4097]: E0517 00:22:59.114932 4097 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:22:59.253681 sshd[11720]: PAM: Permission denied for root from 218.92.0.158 May 17 00:22:59.456268 sshd[11720]: Received disconnect from 218.92.0.158 port 24052:11: [preauth] May 17 00:22:59.456268 sshd[11720]: Disconnected from authenticating user root 218.92.0.158 port 24052 [preauth] May 17 00:22:59.458388 systemd[1]: sshd@11-147.28.129.25:22-218.92.0.158:24052.service: Deactivated successfully. 
May 17 00:23:09.112238 kubelet[4097]: E0517 00:23:09.112183 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:23:14.112477 kubelet[4097]: E0517 00:23:14.112421 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" 
podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:23:14.178393 systemd[1]: Started sshd@12-147.28.129.25:22-147.75.109.163:46602.service - OpenSSH per-connection server daemon (147.75.109.163:46602). May 17 00:23:14.582195 sshd[11802]: Accepted publickey for core from 147.75.109.163 port 46602 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:14.583375 sshd[11802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:14.586956 systemd-logind[2631]: New session 10 of user core. May 17 00:23:14.607058 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:23:14.943194 sshd[11802]: pam_unix(sshd:session): session closed for user core May 17 00:23:14.946128 systemd[1]: sshd@12-147.28.129.25:22-147.75.109.163:46602.service: Deactivated successfully. May 17 00:23:14.947828 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:23:14.948393 systemd-logind[2631]: Session 10 logged out. Waiting for processes to exit. May 17 00:23:14.949003 systemd-logind[2631]: Removed session 10. May 17 00:23:20.019459 systemd[1]: Started sshd@13-147.28.129.25:22-147.75.109.163:55380.service - OpenSSH per-connection server daemon (147.75.109.163:55380). 
May 17 00:23:20.112752 kubelet[4097]: E0517 00:23:20.112703 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:23:20.423886 sshd[11884]: Accepted publickey for core from 147.75.109.163 port 55380 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:20.425106 sshd[11884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:20.428253 systemd-logind[2631]: New session 11 of user core. May 17 00:23:20.441997 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:23:20.781447 sshd[11884]: pam_unix(sshd:session): session closed for user core May 17 00:23:20.784284 systemd[1]: sshd@13-147.28.129.25:22-147.75.109.163:55380.service: Deactivated successfully. May 17 00:23:20.786011 systemd[1]: session-11.scope: Deactivated successfully. 
May 17 00:23:20.786507 systemd-logind[2631]: Session 11 logged out. Waiting for processes to exit. May 17 00:23:20.787061 systemd-logind[2631]: Removed session 11. May 17 00:23:20.858378 systemd[1]: Started sshd@14-147.28.129.25:22-147.75.109.163:55386.service - OpenSSH per-connection server daemon (147.75.109.163:55386). May 17 00:23:21.263463 sshd[11925]: Accepted publickey for core from 147.75.109.163 port 55386 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:21.264701 sshd[11925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:21.267780 systemd-logind[2631]: New session 12 of user core. May 17 00:23:21.283064 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:23:21.637581 sshd[11925]: pam_unix(sshd:session): session closed for user core May 17 00:23:21.640532 systemd[1]: sshd@14-147.28.129.25:22-147.75.109.163:55386.service: Deactivated successfully. May 17 00:23:21.642234 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:23:21.642761 systemd-logind[2631]: Session 12 logged out. Waiting for processes to exit. May 17 00:23:21.643343 systemd-logind[2631]: Removed session 12. May 17 00:23:21.711302 systemd[1]: Started sshd@15-147.28.129.25:22-147.75.109.163:55390.service - OpenSSH per-connection server daemon (147.75.109.163:55390). May 17 00:23:22.113423 sshd[11967]: Accepted publickey for core from 147.75.109.163 port 55390 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:22.114639 sshd[11967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:22.117667 systemd-logind[2631]: New session 13 of user core. May 17 00:23:22.133022 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:23:22.464071 sshd[11967]: pam_unix(sshd:session): session closed for user core May 17 00:23:22.467031 systemd[1]: sshd@15-147.28.129.25:22-147.75.109.163:55390.service: Deactivated successfully. 
May 17 00:23:22.468703 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:23:22.469225 systemd-logind[2631]: Session 13 logged out. Waiting for processes to exit. May 17 00:23:22.469785 systemd-logind[2631]: Removed session 13. May 17 00:23:27.114270 kubelet[4097]: E0517 00:23:27.114223 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:23:27.535318 systemd[1]: Started sshd@16-147.28.129.25:22-147.75.109.163:55394.service - OpenSSH per-connection server daemon (147.75.109.163:55394). May 17 00:23:27.928869 sshd[12017]: Accepted publickey for core from 147.75.109.163 port 55394 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:27.929959 sshd[12017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:27.932911 systemd-logind[2631]: New session 14 of user core. May 17 00:23:27.946991 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:23:28.275079 sshd[12017]: pam_unix(sshd:session): session closed for user core May 17 00:23:28.277953 systemd[1]: sshd@16-147.28.129.25:22-147.75.109.163:55394.service: Deactivated successfully. May 17 00:23:28.280228 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:23:28.280735 systemd-logind[2631]: Session 14 logged out. Waiting for processes to exit. 
May 17 00:23:28.281316 systemd-logind[2631]: Removed session 14. May 17 00:23:28.357400 systemd[1]: Started sshd@17-147.28.129.25:22-147.75.109.163:52790.service - OpenSSH per-connection server daemon (147.75.109.163:52790). May 17 00:23:28.770041 sshd[12055]: Accepted publickey for core from 147.75.109.163 port 52790 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:28.771228 sshd[12055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:28.774239 systemd-logind[2631]: New session 15 of user core. May 17 00:23:28.784007 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:23:29.237724 sshd[12055]: pam_unix(sshd:session): session closed for user core May 17 00:23:29.240510 systemd[1]: sshd@17-147.28.129.25:22-147.75.109.163:52790.service: Deactivated successfully. May 17 00:23:29.242232 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:23:29.242751 systemd-logind[2631]: Session 15 logged out. Waiting for processes to exit. May 17 00:23:29.243347 systemd-logind[2631]: Removed session 15. May 17 00:23:29.308302 systemd[1]: Started sshd@18-147.28.129.25:22-147.75.109.163:52792.service - OpenSSH per-connection server daemon (147.75.109.163:52792). May 17 00:23:29.710300 sshd[12090]: Accepted publickey for core from 147.75.109.163 port 52792 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:29.711462 sshd[12090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:29.714516 systemd-logind[2631]: New session 16 of user core. May 17 00:23:29.734064 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:23:30.509989 sshd[12090]: pam_unix(sshd:session): session closed for user core May 17 00:23:30.512886 systemd[1]: sshd@18-147.28.129.25:22-147.75.109.163:52792.service: Deactivated successfully. May 17 00:23:30.514633 systemd[1]: session-16.scope: Deactivated successfully. 
May 17 00:23:30.515181 systemd-logind[2631]: Session 16 logged out. Waiting for processes to exit. May 17 00:23:30.515752 systemd-logind[2631]: Removed session 16. May 17 00:23:30.585310 systemd[1]: Started sshd@19-147.28.129.25:22-147.75.109.163:52806.service - OpenSSH per-connection server daemon (147.75.109.163:52806). May 17 00:23:30.998779 sshd[12173]: Accepted publickey for core from 147.75.109.163 port 52806 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:30.999952 sshd[12173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:31.002993 systemd-logind[2631]: New session 17 of user core. May 17 00:23:31.019005 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:23:31.442145 sshd[12173]: pam_unix(sshd:session): session closed for user core May 17 00:23:31.445075 systemd[1]: sshd@19-147.28.129.25:22-147.75.109.163:52806.service: Deactivated successfully. May 17 00:23:31.446793 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:23:31.447343 systemd-logind[2631]: Session 17 logged out. Waiting for processes to exit. May 17 00:23:31.447934 systemd-logind[2631]: Removed session 17. May 17 00:23:31.513387 systemd[1]: Started sshd@20-147.28.129.25:22-147.75.109.163:52808.service - OpenSSH per-connection server daemon (147.75.109.163:52808). May 17 00:23:31.916314 sshd[12218]: Accepted publickey for core from 147.75.109.163 port 52808 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:31.917430 sshd[12218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:31.920306 systemd-logind[2631]: New session 18 of user core. May 17 00:23:31.936060 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:23:32.266072 sshd[12218]: pam_unix(sshd:session): session closed for user core May 17 00:23:32.268941 systemd[1]: sshd@20-147.28.129.25:22-147.75.109.163:52808.service: Deactivated successfully. 
May 17 00:23:32.271229 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:23:32.271786 systemd-logind[2631]: Session 18 logged out. Waiting for processes to exit. May 17 00:23:32.272431 systemd-logind[2631]: Removed session 18. May 17 00:23:35.112558 kubelet[4097]: E0517 00:23:35.112504 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:23:37.341433 systemd[1]: Started sshd@21-147.28.129.25:22-147.75.109.163:52812.service - OpenSSH per-connection server daemon (147.75.109.163:52812). 
May 17 00:23:37.746423 sshd[12252]: Accepted publickey for core from 147.75.109.163 port 52812 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:37.747483 sshd[12252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:37.750389 systemd-logind[2631]: New session 19 of user core. May 17 00:23:37.764004 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:23:38.096881 sshd[12252]: pam_unix(sshd:session): session closed for user core May 17 00:23:38.099711 systemd[1]: sshd@21-147.28.129.25:22-147.75.109.163:52812.service: Deactivated successfully. May 17 00:23:38.101434 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:23:38.101972 systemd-logind[2631]: Session 19 logged out. Waiting for processes to exit. May 17 00:23:38.102511 systemd-logind[2631]: Removed session 19. May 17 00:23:42.112777 kubelet[4097]: E0517 00:23:42.112723 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-b5fcf" podUID="a41cd5df-5d9c-4907-bb35-9d4adffa8017" May 17 00:23:43.171370 systemd[1]: Started sshd@22-147.28.129.25:22-147.75.109.163:47468.service - OpenSSH per-connection server daemon (147.75.109.163:47468). 
May 17 00:23:43.573895 sshd[12289]: Accepted publickey for core from 147.75.109.163 port 47468 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:43.575178 sshd[12289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:43.578292 systemd-logind[2631]: New session 20 of user core. May 17 00:23:43.587075 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:23:43.936423 sshd[12289]: pam_unix(sshd:session): session closed for user core May 17 00:23:43.939358 systemd[1]: sshd@22-147.28.129.25:22-147.75.109.163:47468.service: Deactivated successfully. May 17 00:23:43.941058 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:23:43.941575 systemd-logind[2631]: Session 20 logged out. Waiting for processes to exit. May 17 00:23:43.942148 systemd-logind[2631]: Removed session 20. May 17 00:23:47.114570 kubelet[4097]: E0517 00:23:47.114512 4097 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6758cf69db-8wts4" podUID="f9ad7e26-3a56-408f-a437-28e846a147e2" May 17 00:23:49.009327 systemd[1]: Started sshd@23-147.28.129.25:22-147.75.109.163:53964.service - OpenSSH per-connection server daemon (147.75.109.163:53964). May 17 00:23:49.413533 sshd[12358]: Accepted publickey for core from 147.75.109.163 port 53964 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:23:49.414852 sshd[12358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:49.418064 systemd-logind[2631]: New session 21 of user core. May 17 00:23:49.441068 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:23:49.764593 sshd[12358]: pam_unix(sshd:session): session closed for user core May 17 00:23:49.767556 systemd[1]: sshd@23-147.28.129.25:22-147.75.109.163:53964.service: Deactivated successfully. May 17 00:23:49.769204 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:23:49.769733 systemd-logind[2631]: Session 21 logged out. Waiting for processes to exit. May 17 00:23:49.770313 systemd-logind[2631]: Removed session 21.