May 17 01:47:07.150481 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 17 01:47:07.150503 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 01:47:07.150512 kernel: KASLR enabled
May 17 01:47:07.150517 kernel: efi: EFI v2.7 by American Megatrends
May 17 01:47:07.150523 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea464818 RNG=0xebf00018 MEMRESERVE=0xe4663f98
May 17 01:47:07.150529 kernel: random: crng init done
May 17 01:47:07.150536 kernel: esrt: Reserving ESRT space from 0x00000000ea464818 to 0x00000000ea464878.
May 17 01:47:07.150542 kernel: ACPI: Early table checksum verification disabled
May 17 01:47:07.150550 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 17 01:47:07.150556 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 17 01:47:07.150562 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 17 01:47:07.150568 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 17 01:47:07.150574 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 17 01:47:07.150580 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 17 01:47:07.150589 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 17 01:47:07.150595 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 01:47:07.150602 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 17 01:47:07.150608 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 01:47:07.150615 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 17 01:47:07.150621 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 17 01:47:07.150628 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 17 01:47:07.150634 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 17 01:47:07.150640 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 17 01:47:07.150648 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 17 01:47:07.150655 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 17 01:47:07.150661 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 01:47:07.150667 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 17 01:47:07.150674 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 17 01:47:07.150680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 17 01:47:07.150687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 17 01:47:07.150693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 17 01:47:07.150700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 17 01:47:07.150706 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff]
May 17 01:47:07.150712 kernel: Zone ranges:
May 17 01:47:07.150719 kernel:   DMA    [mem 0x0000000088300000-0x00000000ffffffff]
May 17 01:47:07.150726 kernel:   DMA32  empty
May 17 01:47:07.150733 kernel:   Normal [mem 0x0000000100000000-0x0000083fffffffff]
May 17 01:47:07.150739 kernel: Movable zone start for each node
May 17 01:47:07.150745 kernel: Early memory node ranges
May 17 01:47:07.150752 kernel:   node   0: [mem 0x0000000088300000-0x00000000883fffff]
May 17 01:47:07.150761 kernel:   node   0: [mem 0x0000000090000000-0x0000000091ffffff]
May 17 01:47:07.150768 kernel:   node   0: [mem 0x0000000092000000-0x0000000093ffffff]
May 17 01:47:07.150776 kernel:   node   0: [mem 0x0000000094000000-0x00000000eba36fff]
May 17 01:47:07.150782 kernel:   node   0: [mem 0x00000000eba37000-0x00000000ebeadfff]
May 17 01:47:07.150789 kernel:   node   0: [mem 0x00000000ebeae000-0x00000000ebeaefff]
May 17 01:47:07.150796 kernel:   node   0: [mem 0x00000000ebeaf000-0x00000000ebeccfff]
May 17 01:47:07.150802 kernel:   node   0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 17 01:47:07.150809 kernel:   node   0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 17 01:47:07.150815 kernel:   node   0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 17 01:47:07.150822 kernel:   node   0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 17 01:47:07.150829 kernel:   node   0: [mem 0x00000000ec100000-0x00000000ee54ffff]
May 17 01:47:07.150835 kernel:   node   0: [mem 0x00000000ee550000-0x00000000f765ffff]
May 17 01:47:07.150844 kernel:   node   0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 17 01:47:07.150850 kernel:   node   0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 17 01:47:07.150857 kernel:   node   0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 17 01:47:07.150864 kernel:   node   0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 17 01:47:07.150870 kernel:   node   0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 17 01:47:07.150877 kernel:   node   0: [mem 0x0000080000000000-0x000008007fffffff]
May 17 01:47:07.150884 kernel:   node   0: [mem 0x0000080100000000-0x0000083fffffffff]
May 17 01:47:07.150891 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 17 01:47:07.150897 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 17 01:47:07.150904 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 17 01:47:07.150911 kernel: psci: probing for conduit method from ACPI.
May 17 01:47:07.150919 kernel: psci: PSCIv1.1 detected in firmware.
May 17 01:47:07.150926 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 01:47:07.150932 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 01:47:07.150939 kernel: psci: SMC Calling Convention v1.2
May 17 01:47:07.150946 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 17 01:47:07.150952 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 17 01:47:07.150959 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 17 01:47:07.150966 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 17 01:47:07.150973 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 17 01:47:07.150979 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 17 01:47:07.150986 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 17 01:47:07.150993 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 17 01:47:07.151001 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 17 01:47:07.151007 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 17 01:47:07.151014 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 17 01:47:07.151021 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 17 01:47:07.151027 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 17 01:47:07.151034 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 17 01:47:07.151041 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 17 01:47:07.151047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 17 01:47:07.151054 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 17 01:47:07.151061 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 17 01:47:07.151067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 17 01:47:07.151074 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 17 01:47:07.151082 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 17 01:47:07.151089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 17 01:47:07.151095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 17 01:47:07.151102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 17 01:47:07.151109 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 17 01:47:07.151115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 17 01:47:07.151122 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 17 01:47:07.151128 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 17 01:47:07.151172 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 17 01:47:07.151179 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 17 01:47:07.151186 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 17 01:47:07.151194 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 17 01:47:07.151201 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 17 01:47:07.151208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 17 01:47:07.151215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 17 01:47:07.151221 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 17 01:47:07.151228 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 17 01:47:07.151235 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 17 01:47:07.151241 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 17 01:47:07.151248 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 17 01:47:07.151255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 17 01:47:07.151262 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 17 01:47:07.151268 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 17 01:47:07.151277 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 17 01:47:07.151283 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 17 01:47:07.151290 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 17 01:47:07.151297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 17 01:47:07.151303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 17 01:47:07.151310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 17 01:47:07.151317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 17 01:47:07.151323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 17 01:47:07.151337 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 17 01:47:07.151344 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 17 01:47:07.151353 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 17 01:47:07.151360 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 17 01:47:07.151367 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 17 01:47:07.151374 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 17 01:47:07.151381 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 17 01:47:07.151389 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 17 01:47:07.151397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 17 01:47:07.151404 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 17 01:47:07.151412 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 17 01:47:07.151419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 17 01:47:07.151426 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 17 01:47:07.151433 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 17 01:47:07.151440 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 17 01:47:07.151447 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 17 01:47:07.151454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 17 01:47:07.151461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 17 01:47:07.151468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 17 01:47:07.151476 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 17 01:47:07.151484 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 17 01:47:07.151491 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 17 01:47:07.151498 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 17 01:47:07.151505 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 17 01:47:07.151512 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 17 01:47:07.151519 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 17 01:47:07.151526 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 17 01:47:07.151534 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 17 01:47:07.151541 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 17 01:47:07.151548 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 01:47:07.151555 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 01:47:07.151564 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 17 01:47:07.151571 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 17 01:47:07.151578 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 17 01:47:07.151585 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 17 01:47:07.151592 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 17 01:47:07.151599 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 17 01:47:07.151606 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 17 01:47:07.151613 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 17 01:47:07.151620 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 17 01:47:07.151627 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 17 01:47:07.151634 kernel: Detected PIPT I-cache on CPU0
May 17 01:47:07.151643 kernel: CPU features: detected: GIC system register CPU interface
May 17 01:47:07.151650 kernel: CPU features: detected: Virtualization Host Extensions
May 17 01:47:07.151657 kernel: CPU features: detected: Hardware dirty bit management
May 17 01:47:07.151665 kernel: CPU features: detected: Spectre-v4
May 17 01:47:07.151672 kernel: CPU features: detected: Spectre-BHB
May 17 01:47:07.151679 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 01:47:07.151686 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 01:47:07.151694 kernel: CPU features: detected: ARM erratum 1418040
May 17 01:47:07.151701 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 01:47:07.151708 kernel: alternatives: applying boot alternatives
May 17 01:47:07.151717 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 01:47:07.151725 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 01:47:07.151733 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 17 01:47:07.151740 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 17 01:47:07.151747 kernel: printk: log_buf_len min size: 262144 bytes
May 17 01:47:07.151754 kernel: printk: log_buf_len: 1048576 bytes
May 17 01:47:07.151761 kernel: printk: early log buf free: 249904(95%)
May 17 01:47:07.151769 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 17 01:47:07.151776 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 17 01:47:07.151783 kernel: Fallback order for Node 0: 0
May 17 01:47:07.151790 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 17 01:47:07.151797 kernel: Policy zone: Normal
May 17 01:47:07.151806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 01:47:07.151813 kernel: software IO TLB: area num 128.
May 17 01:47:07.151820 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 17 01:47:07.151827 kernel: Memory: 262922456K/268174336K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 5251880K reserved, 0K cma-reserved)
May 17 01:47:07.151835 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 17 01:47:07.151842 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 01:47:07.151850 kernel: rcu: RCU event tracing is enabled.
May 17 01:47:07.151857 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 17 01:47:07.151864 kernel: Trampoline variant of Tasks RCU enabled.
May 17 01:47:07.151872 kernel: Tracing variant of Tasks RCU enabled.
May 17 01:47:07.151879 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 01:47:07.151887 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 17 01:47:07.151895 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 01:47:07.151902 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 17 01:47:07.151909 kernel: GICv3: 672 SPIs implemented
May 17 01:47:07.151916 kernel: GICv3: 0 Extended SPIs implemented
May 17 01:47:07.151923 kernel: Root IRQ handler: gic_handle_irq
May 17 01:47:07.151930 kernel: GICv3: GICv3 features: 16 PPIs
May 17 01:47:07.151937 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 17 01:47:07.151944 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 17 01:47:07.151952 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 17 01:47:07.151959 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 17 01:47:07.151966 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 17 01:47:07.151973 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 17 01:47:07.151981 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 17 01:47:07.151988 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 17 01:47:07.151995 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 17 01:47:07.152002 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 17 01:47:07.152010 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152017 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152024 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 17 01:47:07.152031 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152039 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152046 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 17 01:47:07.152054 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152062 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152069 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 17 01:47:07.152077 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152084 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152091 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 17 01:47:07.152099 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152106 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152113 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 17 01:47:07.152120 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152127 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152137 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 17 01:47:07.152146 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152153 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152160 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 17 01:47:07.152168 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:47:07.152175 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
May 17 01:47:07.152182 kernel: GICv3: using LPI property table @0x00000800003e0000
May 17 01:47:07.152189 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
May 17 01:47:07.152197 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 01:47:07.152204 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152211 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 17 01:47:07.152218 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 17 01:47:07.152227 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 01:47:07.152234 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 01:47:07.152241 kernel: Console: colour dummy device 80x25
May 17 01:47:07.152249 kernel: printk: console [tty0] enabled
May 17 01:47:07.152256 kernel: ACPI: Core revision 20230628
May 17 01:47:07.152263 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 01:47:07.152271 kernel: pid_max: default: 81920 minimum: 640
May 17 01:47:07.152278 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 01:47:07.152285 kernel: landlock: Up and running.
May 17 01:47:07.152293 kernel: SELinux: Initializing.
May 17 01:47:07.152302 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 01:47:07.152309 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 01:47:07.152316 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 01:47:07.152324 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 01:47:07.152331 kernel: rcu: Hierarchical SRCU implementation.
May 17 01:47:07.152339 kernel: rcu: Max phase no-delay instances is 400.
May 17 01:47:07.152346 kernel: Platform MSI: ITS@0x100100040000 domain created
May 17 01:47:07.152353 kernel: Platform MSI: ITS@0x100100060000 domain created
May 17 01:47:07.152360 kernel: Platform MSI: ITS@0x100100080000 domain created
May 17 01:47:07.152369 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 17 01:47:07.152376 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 17 01:47:07.152383 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 17 01:47:07.152390 kernel: Platform MSI: ITS@0x100100100000 domain created
May 17 01:47:07.152397 kernel: Platform MSI: ITS@0x100100120000 domain created
May 17 01:47:07.152405 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 17 01:47:07.152412 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 17 01:47:07.152419 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 17 01:47:07.152426 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 17 01:47:07.152435 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 17 01:47:07.152442 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 17 01:47:07.152449 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 17 01:47:07.152456 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 17 01:47:07.152463 kernel: Remapping and enabling EFI services.
May 17 01:47:07.152471 kernel: smp: Bringing up secondary CPUs ...
May 17 01:47:07.152478 kernel: Detected PIPT I-cache on CPU1
May 17 01:47:07.152485 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 17 01:47:07.152493 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 17 01:47:07.152502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152509 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 17 01:47:07.152516 kernel: Detected PIPT I-cache on CPU2
May 17 01:47:07.152524 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 17 01:47:07.152531 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 17 01:47:07.152538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152545 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 17 01:47:07.152552 kernel: Detected PIPT I-cache on CPU3
May 17 01:47:07.152560 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 17 01:47:07.152567 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 17 01:47:07.152576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152583 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 17 01:47:07.152590 kernel: Detected PIPT I-cache on CPU4
May 17 01:47:07.152597 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 17 01:47:07.152605 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 17 01:47:07.152612 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152619 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 17 01:47:07.152626 kernel: Detected PIPT I-cache on CPU5
May 17 01:47:07.152633 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 17 01:47:07.152642 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 17 01:47:07.152649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152657 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 17 01:47:07.152664 kernel: Detected PIPT I-cache on CPU6
May 17 01:47:07.152671 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 17 01:47:07.152679 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 17 01:47:07.152686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152693 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 17 01:47:07.152700 kernel: Detected PIPT I-cache on CPU7
May 17 01:47:07.152708 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 17 01:47:07.152716 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 17 01:47:07.152724 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152731 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 17 01:47:07.152738 kernel: Detected PIPT I-cache on CPU8
May 17 01:47:07.152746 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 17 01:47:07.152753 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 17 01:47:07.152760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152767 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 17 01:47:07.152774 kernel: Detected PIPT I-cache on CPU9
May 17 01:47:07.152782 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 17 01:47:07.152790 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 17 01:47:07.152797 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152805 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 17 01:47:07.152812 kernel: Detected PIPT I-cache on CPU10
May 17 01:47:07.152819 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 17 01:47:07.152826 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 17 01:47:07.152834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152841 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 17 01:47:07.152848 kernel: Detected PIPT I-cache on CPU11
May 17 01:47:07.152857 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 17 01:47:07.152864 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 17 01:47:07.152872 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152879 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 17 01:47:07.152886 kernel: Detected PIPT I-cache on CPU12
May 17 01:47:07.152893 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 17 01:47:07.152901 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 17 01:47:07.152908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152915 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 17 01:47:07.152923 kernel: Detected PIPT I-cache on CPU13
May 17 01:47:07.152932 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 17 01:47:07.152939 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 17 01:47:07.152946 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152954 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 17 01:47:07.152961 kernel: Detected PIPT I-cache on CPU14
May 17 01:47:07.152968 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 17 01:47:07.152976 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 17 01:47:07.152983 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152990 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 17 01:47:07.152999 kernel: Detected PIPT I-cache on CPU15
May 17 01:47:07.153006 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 17 01:47:07.153013 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 17 01:47:07.153021 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153028 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 17 01:47:07.153035 kernel: Detected PIPT I-cache on CPU16
May 17 01:47:07.153043 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 17 01:47:07.153050 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 17 01:47:07.153057 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153074 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 17 01:47:07.153083 kernel: Detected PIPT I-cache on CPU17
May 17 01:47:07.153090 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 17 01:47:07.153098 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 17 01:47:07.153105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153113 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 17 01:47:07.153121 kernel: Detected PIPT I-cache on CPU18
May 17 01:47:07.153128 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 17 01:47:07.153139 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 17 01:47:07.153148 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153156 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 17 01:47:07.153164 kernel: Detected PIPT I-cache on CPU19
May 17 01:47:07.153171 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 17 01:47:07.153179 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
May 17 01:47:07.153186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153194 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 17 01:47:07.153203 kernel: Detected PIPT I-cache on CPU20
May 17 01:47:07.153211 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 17 01:47:07.153219 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
May 17 01:47:07.153226 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153234 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 17 01:47:07.153242 kernel: Detected PIPT I-cache on CPU21
May 17 01:47:07.153251 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 17 01:47:07.153259 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
May 17 01:47:07.153266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153275 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 17 01:47:07.153283 kernel: Detected PIPT I-cache on CPU22
May 17 01:47:07.153290 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 17 01:47:07.153298 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
May 17 01:47:07.153306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153314 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 17 01:47:07.153321 kernel: Detected PIPT I-cache on CPU23
May 17 01:47:07.153329 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 17 01:47:07.153336 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
May 17 01:47:07.153346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153354 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 17 01:47:07.153361 kernel: Detected PIPT I-cache on CPU24
May 17 01:47:07.153369 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 17 01:47:07.153377 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
May 17 01:47:07.153384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153392 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 17 01:47:07.153401 kernel: Detected PIPT I-cache on CPU25
May 17 01:47:07.153409 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 17 01:47:07.153416 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
May 17 01:47:07.153425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153433 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 17 01:47:07.153441 kernel: Detected PIPT I-cache on CPU26
May 17 01:47:07.153448 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 17 01:47:07.153456 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
May 17 01:47:07.153464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153471 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 17 01:47:07.153479 kernel: Detected PIPT I-cache on CPU27
May 17 01:47:07.153487 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 17 01:47:07.153496 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
May 17 01:47:07.153503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153511 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 17
01:47:07.153519 kernel: Detected PIPT I-cache on CPU28 May 17 01:47:07.153526 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 17 01:47:07.153534 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 17 01:47:07.153542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153550 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 17 01:47:07.153557 kernel: Detected PIPT I-cache on CPU29 May 17 01:47:07.153565 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 17 01:47:07.153574 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 17 01:47:07.153582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153590 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] May 17 01:47:07.153597 kernel: Detected PIPT I-cache on CPU30 May 17 01:47:07.153605 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 17 01:47:07.153613 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 17 01:47:07.153621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153628 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 17 01:47:07.153636 kernel: Detected PIPT I-cache on CPU31 May 17 01:47:07.153645 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 17 01:47:07.153653 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 17 01:47:07.153660 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153668 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 17 01:47:07.153676 kernel: Detected PIPT I-cache on CPU32 May 17 01:47:07.153683 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 17 01:47:07.153691 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 May 17 01:47:07.153698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153706 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 17 01:47:07.153715 kernel: Detected PIPT I-cache on CPU33 May 17 01:47:07.153723 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 17 01:47:07.153731 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 17 01:47:07.153738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153746 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 17 01:47:07.153753 kernel: Detected PIPT I-cache on CPU34 May 17 01:47:07.153761 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 17 01:47:07.153769 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 17 01:47:07.153776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153784 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 17 01:47:07.153793 kernel: Detected PIPT I-cache on CPU35 May 17 01:47:07.153801 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 17 01:47:07.153808 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 17 01:47:07.153816 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153823 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 17 01:47:07.153831 kernel: Detected PIPT I-cache on CPU36 May 17 01:47:07.153839 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 17 01:47:07.153846 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 17 01:47:07.153854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153863 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 
17 01:47:07.153871 kernel: Detected PIPT I-cache on CPU37 May 17 01:47:07.153878 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 17 01:47:07.153886 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 17 01:47:07.153894 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153901 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 17 01:47:07.153909 kernel: Detected PIPT I-cache on CPU38 May 17 01:47:07.153916 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 17 01:47:07.153925 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 17 01:47:07.153933 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153942 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 17 01:47:07.153950 kernel: Detected PIPT I-cache on CPU39 May 17 01:47:07.153957 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 17 01:47:07.153965 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 17 01:47:07.153973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.153981 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 17 01:47:07.153988 kernel: Detected PIPT I-cache on CPU40 May 17 01:47:07.153996 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 17 01:47:07.154005 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 17 01:47:07.154013 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154020 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 17 01:47:07.154028 kernel: Detected PIPT I-cache on CPU41 May 17 01:47:07.154036 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 17 01:47:07.154043 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 May 17 01:47:07.154051 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154059 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 17 01:47:07.154066 kernel: Detected PIPT I-cache on CPU42 May 17 01:47:07.154075 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 17 01:47:07.154083 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 17 01:47:07.154091 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154098 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 17 01:47:07.154106 kernel: Detected PIPT I-cache on CPU43 May 17 01:47:07.154113 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 17 01:47:07.154121 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 17 01:47:07.154129 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154139 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 17 01:47:07.154146 kernel: Detected PIPT I-cache on CPU44 May 17 01:47:07.154156 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 17 01:47:07.154164 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 17 01:47:07.154171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154179 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 17 01:47:07.154186 kernel: Detected PIPT I-cache on CPU45 May 17 01:47:07.154194 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 17 01:47:07.154202 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 17 01:47:07.154210 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154218 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
May 17 01:47:07.154227 kernel: Detected PIPT I-cache on CPU46 May 17 01:47:07.154234 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 17 01:47:07.154242 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 17 01:47:07.154250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154257 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 17 01:47:07.154265 kernel: Detected PIPT I-cache on CPU47 May 17 01:47:07.154272 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 17 01:47:07.154280 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 17 01:47:07.154288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154295 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 17 01:47:07.154304 kernel: Detected PIPT I-cache on CPU48 May 17 01:47:07.154312 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 17 01:47:07.154319 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 17 01:47:07.154327 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154335 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 17 01:47:07.154342 kernel: Detected PIPT I-cache on CPU49 May 17 01:47:07.154350 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 17 01:47:07.154358 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 17 01:47:07.154365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154374 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 17 01:47:07.154382 kernel: Detected PIPT I-cache on CPU50 May 17 01:47:07.154391 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 17 01:47:07.154399 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 May 17 01:47:07.154406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154414 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 17 01:47:07.154421 kernel: Detected PIPT I-cache on CPU51 May 17 01:47:07.154429 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 17 01:47:07.154437 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 17 01:47:07.154446 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154454 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 17 01:47:07.154461 kernel: Detected PIPT I-cache on CPU52 May 17 01:47:07.154469 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 17 01:47:07.154477 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 17 01:47:07.154485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154492 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 17 01:47:07.154500 kernel: Detected PIPT I-cache on CPU53 May 17 01:47:07.154508 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 17 01:47:07.154515 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 17 01:47:07.154525 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154532 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 17 01:47:07.154540 kernel: Detected PIPT I-cache on CPU54 May 17 01:47:07.154548 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 17 01:47:07.154555 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 17 01:47:07.154563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154571 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] May 17 01:47:07.154578 kernel: Detected PIPT I-cache on CPU55 May 17 01:47:07.154586 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 17 01:47:07.154595 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 17 01:47:07.154603 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154610 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 17 01:47:07.154618 kernel: Detected PIPT I-cache on CPU56 May 17 01:47:07.154625 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 17 01:47:07.154633 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 17 01:47:07.154641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154648 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 17 01:47:07.154657 kernel: Detected PIPT I-cache on CPU57 May 17 01:47:07.154665 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 17 01:47:07.154674 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 17 01:47:07.154682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154689 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 17 01:47:07.154697 kernel: Detected PIPT I-cache on CPU58 May 17 01:47:07.154704 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 17 01:47:07.154712 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 17 01:47:07.154720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154728 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 17 01:47:07.154736 kernel: Detected PIPT I-cache on CPU59 May 17 01:47:07.154745 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 17 01:47:07.154752 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 May 17 01:47:07.154760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154768 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 17 01:47:07.154775 kernel: Detected PIPT I-cache on CPU60 May 17 01:47:07.154783 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 17 01:47:07.154791 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 17 01:47:07.154799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154806 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 17 01:47:07.154814 kernel: Detected PIPT I-cache on CPU61 May 17 01:47:07.154823 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 17 01:47:07.154831 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 17 01:47:07.154839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154846 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 17 01:47:07.154854 kernel: Detected PIPT I-cache on CPU62 May 17 01:47:07.154861 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 17 01:47:07.154869 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 17 01:47:07.154877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154884 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 17 01:47:07.154893 kernel: Detected PIPT I-cache on CPU63 May 17 01:47:07.154901 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 17 01:47:07.154909 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 17 01:47:07.154917 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154924 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] May 17 01:47:07.154932 kernel: Detected PIPT I-cache on CPU64 May 17 01:47:07.154940 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 17 01:47:07.154947 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 17 01:47:07.154955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.154963 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 17 01:47:07.154971 kernel: Detected PIPT I-cache on CPU65 May 17 01:47:07.154979 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 17 01:47:07.154987 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 17 01:47:07.154995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155002 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 17 01:47:07.155010 kernel: Detected PIPT I-cache on CPU66 May 17 01:47:07.155018 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 17 01:47:07.155025 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 17 01:47:07.155033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155042 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 17 01:47:07.155050 kernel: Detected PIPT I-cache on CPU67 May 17 01:47:07.155058 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 17 01:47:07.155066 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 17 01:47:07.155073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155081 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 17 01:47:07.155088 kernel: Detected PIPT I-cache on CPU68 May 17 01:47:07.155096 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 17 01:47:07.155104 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 May 17 01:47:07.155113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155120 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 17 01:47:07.155128 kernel: Detected PIPT I-cache on CPU69 May 17 01:47:07.155138 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 17 01:47:07.155146 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 17 01:47:07.155153 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155161 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 17 01:47:07.155169 kernel: Detected PIPT I-cache on CPU70 May 17 01:47:07.155176 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 17 01:47:07.155184 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 17 01:47:07.155193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155201 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 17 01:47:07.155208 kernel: Detected PIPT I-cache on CPU71 May 17 01:47:07.155216 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 17 01:47:07.155224 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 17 01:47:07.155231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155239 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 17 01:47:07.155247 kernel: Detected PIPT I-cache on CPU72 May 17 01:47:07.155254 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 17 01:47:07.155264 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 17 01:47:07.155271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155279 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] May 17 01:47:07.155286 kernel: Detected PIPT I-cache on CPU73 May 17 01:47:07.155294 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 17 01:47:07.155302 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 17 01:47:07.155309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155317 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 17 01:47:07.155325 kernel: Detected PIPT I-cache on CPU74 May 17 01:47:07.155332 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 17 01:47:07.155342 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 17 01:47:07.155349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155357 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 17 01:47:07.155365 kernel: Detected PIPT I-cache on CPU75 May 17 01:47:07.155372 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 17 01:47:07.155380 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 17 01:47:07.155388 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155395 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 17 01:47:07.155403 kernel: Detected PIPT I-cache on CPU76 May 17 01:47:07.155412 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 17 01:47:07.155420 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 17 01:47:07.155427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155435 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 17 01:47:07.155443 kernel: Detected PIPT I-cache on CPU77 May 17 01:47:07.155450 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 17 01:47:07.155458 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 17 01:47:07.155466 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155473 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 17 01:47:07.155481 kernel: Detected PIPT I-cache on CPU78 May 17 01:47:07.155490 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 17 01:47:07.155498 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 17 01:47:07.155506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155513 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 17 01:47:07.155521 kernel: Detected PIPT I-cache on CPU79 May 17 01:47:07.155529 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 17 01:47:07.155536 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 17 01:47:07.155544 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.155552 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 17 01:47:07.155561 kernel: smp: Brought up 1 node, 80 CPUs May 17 01:47:07.155568 kernel: SMP: Total of 80 processors activated. 
May 17 01:47:07.155576 kernel: CPU features: detected: 32-bit EL0 Support
May 17 01:47:07.155584 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 01:47:07.155592 kernel: CPU features: detected: Common not Private translations
May 17 01:47:07.155599 kernel: CPU features: detected: CRC32 instructions
May 17 01:47:07.155607 kernel: CPU features: detected: Enhanced Virtualization Traps
May 17 01:47:07.155615 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 01:47:07.155623 kernel: CPU features: detected: LSE atomic instructions
May 17 01:47:07.155632 kernel: CPU features: detected: Privileged Access Never
May 17 01:47:07.155639 kernel: CPU features: detected: RAS Extension Support
May 17 01:47:07.155647 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 17 01:47:07.155654 kernel: CPU: All CPU(s) started at EL2
May 17 01:47:07.155662 kernel: alternatives: applying system-wide alternatives
May 17 01:47:07.155670 kernel: devtmpfs: initialized
May 17 01:47:07.155678 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 01:47:07.155685 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
May 17 01:47:07.155693 kernel: pinctrl core: initialized pinctrl subsystem
May 17 01:47:07.155702 kernel: SMBIOS 3.4.0 present.
May 17 01:47:07.155710 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021
May 17 01:47:07.155718 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 01:47:07.155725 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
May 17 01:47:07.155733 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 01:47:07.155741 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 01:47:07.155748 kernel: audit: initializing netlink subsys (disabled)
May 17 01:47:07.155756 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1
May 17 01:47:07.155764 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 01:47:07.155773 kernel: cpuidle: using governor menu
May 17 01:47:07.155781 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 01:47:07.155788 kernel: ASID allocator initialised with 32768 entries
May 17 01:47:07.155796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 01:47:07.155804 kernel: Serial: AMBA PL011 UART driver
May 17 01:47:07.155812 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 01:47:07.155819 kernel: Modules: 0 pages in range for non-PLT usage
May 17 01:47:07.155827 kernel: Modules: 509024 pages in range for PLT usage
May 17 01:47:07.155835 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 01:47:07.155844 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 01:47:07.155851 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 01:47:07.155859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 01:47:07.155867 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 01:47:07.155875 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 01:47:07.155883 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 01:47:07.155891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 01:47:07.155898 kernel: ACPI: Added _OSI(Module Device)
May 17 01:47:07.155906 kernel: ACPI: Added _OSI(Processor Device)
May 17 01:47:07.155915 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 01:47:07.155923 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 01:47:07.155930 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
May 17 01:47:07.155938 kernel: ACPI: Interpreter enabled
May 17 01:47:07.155945 kernel: ACPI: Using GIC for interrupt routing
May 17 01:47:07.155953 kernel: ACPI: MCFG table detected, 8 entries
May 17 01:47:07.155961 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155969 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155976 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155985 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155993 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156001 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156008 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156016 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156024 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
May 17 01:47:07.156032 kernel: printk: console [ttyAMA0] enabled
May 17 01:47:07.156039 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
May 17 01:47:07.156047 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
May 17 01:47:07.156192 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.156268 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.156335 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.156399 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.156463 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
May 17 01:47:07.156525 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
May 17 01:47:07.156538 kernel: PCI host bridge to bus 000d:00
May 17 01:47:07.156614 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
May 17 01:47:07.156673 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
May 17 01:47:07.156731 kernel: pci_bus 000d:00: root bus resource [bus 00-ff]
May 17 01:47:07.156814 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000
May 17 01:47:07.156889 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400
May 17 01:47:07.156957 kernel: pci 000d:00:01.0: enabling Extended Tags
May 17 01:47:07.157025 kernel: pci 000d:00:01.0: supports D1 D2
May 17 01:47:07.157091 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157181 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400
May 17 01:47:07.157250 kernel: pci 000d:00:02.0: supports D1 D2
May 17 01:47:07.157316 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157392 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400
May 17 01:47:07.157460 kernel: pci 000d:00:03.0: supports D1 D2
May 17 01:47:07.157526 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157599 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400
May 17 01:47:07.157665 kernel: pci 000d:00:04.0: supports D1 D2
May 17 01:47:07.157730 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157740 kernel: acpiphp: Slot [1] registered
May 17 01:47:07.157748 kernel: acpiphp: Slot [2] registered
May 17 01:47:07.157756 kernel: acpiphp: Slot [3] registered
May 17 01:47:07.157766 kernel: acpiphp: Slot [4] registered
May 17 01:47:07.157826 kernel: pci_bus 000d:00: on NUMA node 0
May 17 01:47:07.157895 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 01:47:07.157961 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.158027 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.158092 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.158187 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.158255 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.158320 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.158382 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.158445 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.158512 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.158577 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.158640 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.158708 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff]
May 17 01:47:07.158772 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref]
May 17
01:47:07.158837 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] May 17 01:47:07.158902 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 01:47:07.158968 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 17 01:47:07.159033 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 01:47:07.159099 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 17 01:47:07.159167 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 01:47:07.159236 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.159301 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.159367 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.159432 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.159498 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.159564 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.159630 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.159697 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.159761 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.159828 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.159892 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.159958 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.160023 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.160088 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.160155 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.160224 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] May 17 01:47:07.160288 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 17 01:47:07.160354 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 17 01:47:07.160418 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 01:47:07.160484 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 17 01:47:07.160548 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 17 01:47:07.160614 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 01:47:07.160682 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 17 01:47:07.160747 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 17 01:47:07.160813 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 01:47:07.160876 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 17 01:47:07.160942 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 17 01:47:07.161007 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 01:47:07.161069 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 17 01:47:07.161127 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 17 01:47:07.161202 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 17 01:47:07.161262 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 01:47:07.161330 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] May 17 01:47:07.161391 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 01:47:07.161470 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 17 01:47:07.161532 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 01:47:07.161602 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 17 
01:47:07.161662 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 01:47:07.161673 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 17 01:47:07.161745 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:47:07.161813 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:47:07.161875 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 17 01:47:07.161939 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:47:07.162001 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 17 01:47:07.162064 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 17 01:47:07.162074 kernel: PCI host bridge to bus 0000:00 May 17 01:47:07.162142 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 17 01:47:07.162204 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 17 01:47:07.162261 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 01:47:07.162335 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 17 01:47:07.162407 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 17 01:47:07.162474 kernel: pci 0000:00:01.0: enabling Extended Tags May 17 01:47:07.162538 kernel: pci 0000:00:01.0: supports D1 D2 May 17 01:47:07.162603 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 17 01:47:07.162679 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 17 01:47:07.162744 kernel: pci 0000:00:02.0: supports D1 D2 May 17 01:47:07.162809 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 17 01:47:07.162881 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 17 01:47:07.162948 kernel: pci 0000:00:03.0: supports D1 D2 May 17 01:47:07.163012 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 17 01:47:07.163085 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 17 01:47:07.163158 kernel: pci 0000:00:04.0: supports D1 D2 May 17 01:47:07.163224 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 17 01:47:07.163235 kernel: acpiphp: Slot [1-1] registered May 17 01:47:07.163242 kernel: acpiphp: Slot [2-1] registered May 17 01:47:07.163250 kernel: acpiphp: Slot [3-1] registered May 17 01:47:07.163258 kernel: acpiphp: Slot [4-1] registered May 17 01:47:07.163313 kernel: pci_bus 0000:00: on NUMA node 0 May 17 01:47:07.163380 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:47:07.163447 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.163513 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.163579 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:47:07.163644 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.163708 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.163774 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:47:07.163840 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.163907 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.163973 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:47:07.164037 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 May 17 01:47:07.164102 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.164170 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 17 01:47:07.164236 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 01:47:07.164300 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 17 01:47:07.164368 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 01:47:07.164433 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 17 01:47:07.164498 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 01:47:07.164562 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 17 01:47:07.164627 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 01:47:07.164691 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.164757 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.164822 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.164890 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.164955 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.165019 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.165085 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.165153 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.165220 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.165283 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.165349 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.165413 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.165483 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.165547 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.165612 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.165675 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.165740 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:47:07.165804 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 17 01:47:07.165869 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 01:47:07.165934 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 17 01:47:07.166001 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 17 01:47:07.166067 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 01:47:07.166135 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 17 01:47:07.166203 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 17 01:47:07.166267 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 01:47:07.166332 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 17 01:47:07.166396 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 17 01:47:07.166462 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 01:47:07.166520 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 17 01:47:07.166581 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 17 01:47:07.166649 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 17 01:47:07.166709 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 01:47:07.166777 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 17 
01:47:07.166837 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 01:47:07.166913 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 17 01:47:07.166979 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 01:47:07.167046 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 17 01:47:07.167107 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 01:47:07.167117 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 17 01:47:07.167193 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:47:07.167257 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:47:07.167324 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 17 01:47:07.167386 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:47:07.167450 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 17 01:47:07.167512 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 17 01:47:07.167522 kernel: PCI host bridge to bus 0005:00 May 17 01:47:07.167586 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 17 01:47:07.167644 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 17 01:47:07.167704 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 17 01:47:07.167775 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:47:07.167852 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:47:07.167918 kernel: pci 0005:00:01.0: supports D1 D2 May 17 01:47:07.167985 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 17 01:47:07.168057 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
May 17 01:47:07.168122 kernel: pci 0005:00:03.0: supports D1 D2 May 17 01:47:07.168194 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 17 01:47:07.168267 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:47:07.168335 kernel: pci 0005:00:05.0: supports D1 D2 May 17 01:47:07.168401 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 17 01:47:07.168474 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 17 01:47:07.168538 kernel: pci 0005:00:07.0: supports D1 D2 May 17 01:47:07.168603 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 17 01:47:07.168615 kernel: acpiphp: Slot [1-2] registered May 17 01:47:07.168623 kernel: acpiphp: Slot [2-2] registered May 17 01:47:07.168696 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 17 01:47:07.168765 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 17 01:47:07.168831 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 17 01:47:07.168905 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 17 01:47:07.168973 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 17 01:47:07.169042 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 17 01:47:07.169102 kernel: pci_bus 0005:00: on NUMA node 0 May 17 01:47:07.169188 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:47:07.169256 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.169323 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.169395 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:47:07.169460 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.169533 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.169613 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:47:07.169677 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.169744 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 01:47:07.169809 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:47:07.169878 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.169942 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 17 01:47:07.170011 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 17 01:47:07.170076 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 01:47:07.170148 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 17 01:47:07.170213 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 01:47:07.170279 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 17 01:47:07.170343 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 01:47:07.170409 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 17 01:47:07.170477 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 01:47:07.170542 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.170607 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.170673 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 
01:47:07.170737 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.170804 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.170870 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.170935 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.171003 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.171069 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.171136 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.171203 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.171269 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.171335 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.171400 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.171464 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.171529 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.171593 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 17 01:47:07.171675 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 17 01:47:07.171741 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 01:47:07.171808 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 17 01:47:07.171874 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 17 01:47:07.171939 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 01:47:07.172009 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 17 01:47:07.172078 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 17 01:47:07.172148 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 17 01:47:07.172214 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 17 01:47:07.172280 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 01:47:07.172348 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 17 01:47:07.172417 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 17 01:47:07.172481 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 17 01:47:07.172551 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 17 01:47:07.172615 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 01:47:07.172676 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 17 01:47:07.172734 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 17 01:47:07.172805 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 17 01:47:07.172867 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 01:47:07.172946 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 17 01:47:07.173008 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 01:47:07.173075 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 17 01:47:07.173139 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 01:47:07.173207 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 17 01:47:07.173274 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 01:47:07.173284 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 17 01:47:07.173363 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:47:07.173431 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:47:07.173494 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability] May 17 01:47:07.173557 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:47:07.173622 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 17 01:47:07.173693 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 17 01:47:07.173704 kernel: PCI host bridge to bus 0003:00 May 17 01:47:07.173772 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 17 01:47:07.173834 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 17 01:47:07.173894 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 17 01:47:07.173968 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:47:07.174041 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:47:07.174108 kernel: pci 0003:00:01.0: supports D1 D2 May 17 01:47:07.174178 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 17 01:47:07.174250 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 17 01:47:07.174316 kernel: pci 0003:00:03.0: supports D1 D2 May 17 01:47:07.174380 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 17 01:47:07.174452 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:47:07.174517 kernel: pci 0003:00:05.0: supports D1 D2 May 17 01:47:07.174584 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 17 01:47:07.174594 kernel: acpiphp: Slot [1-3] registered May 17 01:47:07.174602 kernel: acpiphp: Slot [2-3] registered May 17 01:47:07.174675 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 17 01:47:07.174743 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 17 01:47:07.174811 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 17 01:47:07.174876 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 17 01:47:07.174943 kernel: pci 
0003:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:47:07.175012 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 17 01:47:07.175081 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:47:07.175195 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 17 01:47:07.175265 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 01:47:07.175332 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 17 01:47:07.175405 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 17 01:47:07.175474 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 17 01:47:07.175539 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 17 01:47:07.175605 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 17 01:47:07.175670 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 17 01:47:07.175735 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 17 01:47:07.175800 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:47:07.175866 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 17 01:47:07.175930 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 01:47:07.175991 kernel: pci_bus 0003:00: on NUMA node 0 May 17 01:47:07.176055 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:47:07.176119 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.176186 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.176252 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.176316 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.176379 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.176447 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000
May 17 01:47:07.176510 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000
May 17 01:47:07.176573 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 17 01:47:07.176636 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref]
May 17 01:47:07.176713 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 17 01:47:07.176779 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref]
May 17 01:47:07.176843 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 17 01:47:07.176909 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 01:47:07.176974 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.177037 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.177101 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.177169 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.177235 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.177300 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.177365 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.177430 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.177497 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.177562 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.177626 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.177692 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.177757 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
May 17 01:47:07.177822 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]
May 17 01:47:07.177887 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]
May 17 01:47:07.177955 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
May 17 01:47:07.178021 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]
May 17 01:47:07.178088 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]
May 17 01:47:07.178160 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff]
May 17 01:47:07.178230 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff]
May 17 01:47:07.178298 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff]
May 17 01:47:07.178366 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref]
May 17 01:47:07.178436 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref]
May 17 01:47:07.178503 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff]
May 17 01:47:07.178571 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref]
May 17 01:47:07.178638 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref]
May 17 01:47:07.178705 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 17 01:47:07.178773 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 17 01:47:07.178839 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 17 01:47:07.178910 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 17 01:47:07.178978 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 17 01:47:07.179048 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 17 01:47:07.179115 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 17 01:47:07.179187 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 17 01:47:07.179252 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
May 17 01:47:07.179318 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
May 17 01:47:07.179386 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 01:47:07.179447 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 01:47:07.179504 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
May 17 01:47:07.179562 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
May 17 01:47:07.179642 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
May 17 01:47:07.179703 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
May 17 01:47:07.179774 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
May 17 01:47:07.179834 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
May 17 01:47:07.179902 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
May 17 01:47:07.179961 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 01:47:07.179972 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
May 17 01:47:07.180042 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.180105 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.180176 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.180239 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.180302 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
May 17 01:47:07.180365 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
May 17 01:47:07.180376 kernel: PCI host bridge to bus 000c:00
May 17 01:47:07.180441 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
May 17 01:47:07.180500 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
May 17 01:47:07.180561 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
May 17 01:47:07.180635 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000
May 17 01:47:07.180710 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400
May 17 01:47:07.180781 kernel: pci 000c:00:01.0: enabling Extended Tags
May 17 01:47:07.180846 kernel: pci 000c:00:01.0: supports D1 D2
May 17 01:47:07.180912 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.180984 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400
May 17 01:47:07.181053 kernel: pci 000c:00:02.0: supports D1 D2
May 17 01:47:07.181118 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.181196 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400
May 17 01:47:07.181261 kernel: pci 000c:00:03.0: supports D1 D2
May 17 01:47:07.181327 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.181398 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400
May 17 01:47:07.181465 kernel: pci 000c:00:04.0: supports D1 D2
May 17 01:47:07.181532 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.181543 kernel: acpiphp: Slot [1-4] registered
May 17 01:47:07.181551 kernel: acpiphp: Slot [2-4] registered
May 17 01:47:07.181559 kernel: acpiphp: Slot [3-2] registered
May 17 01:47:07.181567 kernel: acpiphp: Slot [4-2] registered
May 17 01:47:07.181625 kernel: pci_bus 000c:00: on NUMA node 0
May 17 01:47:07.181693 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 01:47:07.181758 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.181827 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.181892 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.181958 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.182023 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.182089 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.182158 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.182224 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.182292 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.182358 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.182423 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.182489 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff]
May 17 01:47:07.182554 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref]
May 17 01:47:07.182619 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff]
May 17 01:47:07.182684 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref]
May 17 01:47:07.182752 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
May 17 01:47:07.182817 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref]
May 17 01:47:07.182882 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff]
May 17 01:47:07.182951 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref]
May 17 01:47:07.183016 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.183082 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.183149 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.183216 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.183282 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.183348 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.183412 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.183479 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.183543 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.183609 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.183673 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.183739 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.183807 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.183872 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.183937 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.184003 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.184068 kernel: pci 000c:00:01.0: PCI bridge to [bus 01]
May 17 01:47:07.184136 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]
May 17 01:47:07.184202 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]
May 17 01:47:07.184267 kernel: pci 000c:00:02.0: PCI bridge to [bus 02]
May 17 01:47:07.184335 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]
May 17 01:47:07.184404 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]
May 17 01:47:07.184470 kernel: pci 000c:00:03.0: PCI bridge to [bus 03]
May 17 01:47:07.184534 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]
May 17 01:47:07.184600 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]
May 17 01:47:07.184665 kernel: pci 000c:00:04.0: PCI bridge to [bus 04]
May 17 01:47:07.184732 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]
May 17 01:47:07.184797 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]
May 17 01:47:07.184857 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window]
May 17 01:47:07.184914 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window]
May 17 01:47:07.184986 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff]
May 17 01:47:07.185048 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref]
May 17 01:47:07.185127 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff]
May 17 01:47:07.185195 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref]
May 17 01:47:07.185262 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff]
May 17 01:47:07.185324 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref]
May 17 01:47:07.185391 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff]
May 17 01:47:07.185452 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref]
May 17 01:47:07.185462 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff])
May 17 01:47:07.185536 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.185602 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.185665 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.185728 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.185791 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00
May 17 01:47:07.185854 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff]
May 17 01:47:07.185864 kernel: PCI host bridge to bus 0002:00
May 17 01:47:07.185934 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window]
May 17 01:47:07.185993 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window]
May 17 01:47:07.186051 kernel: pci_bus 0002:00: root bus resource [bus 00-ff]
May 17 01:47:07.186123 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000
May 17 01:47:07.186201 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400
May 17 01:47:07.186267 kernel: pci 0002:00:01.0: supports D1 D2
May 17 01:47:07.186335 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.186405 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400
May 17 01:47:07.186472 kernel: pci 0002:00:03.0: supports D1 D2
May 17 01:47:07.186536 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.186608 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400
May 17 01:47:07.186673 kernel: pci 0002:00:05.0: supports D1 D2
May 17 01:47:07.186738 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.186814 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400
May 17 01:47:07.186885 kernel: pci 0002:00:07.0: supports D1 D2
May 17 01:47:07.186953 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.186963 kernel: acpiphp: Slot [1-5] registered
May 17 01:47:07.186972 kernel: acpiphp: Slot [2-5] registered
May 17 01:47:07.186980 kernel: acpiphp: Slot [3-3] registered
May 17 01:47:07.186988 kernel: acpiphp: Slot [4-3] registered
May 17 01:47:07.187045 kernel: pci_bus 0002:00: on NUMA node 0
May 17 01:47:07.187116 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 01:47:07.187420 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.187491 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.187562 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.187627 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.187693 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.187759 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.187823 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.187887 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.187953 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.188016 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.188081 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.188152 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff]
May 17 01:47:07.188218 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref]
May 17 01:47:07.188281 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff]
May 17 01:47:07.188345 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref]
May 17 01:47:07.188409 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff]
May 17 01:47:07.188473 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref]
May 17 01:47:07.188537 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff]
May 17 01:47:07.188617 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref]
May 17 01:47:07.188683 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.188746 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.188811 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.188874 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.188939 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.189001 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.189065 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.189134 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.189199 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.189263 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.189326 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.189390 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.189454 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.189517 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.189581 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.189646 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.189712 kernel: pci 0002:00:01.0: PCI bridge to [bus 01]
May 17 01:47:07.189776 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]
May 17 01:47:07.189842 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]
May 17 01:47:07.189906 kernel: pci 0002:00:03.0: PCI bridge to [bus 02]
May 17 01:47:07.189970 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]
May 17 01:47:07.190034 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]
May 17 01:47:07.190098 kernel: pci 0002:00:05.0: PCI bridge to [bus 03]
May 17 01:47:07.190168 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]
May 17 01:47:07.190232 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]
May 17 01:47:07.190297 kernel: pci 0002:00:07.0: PCI bridge to [bus 04]
May 17 01:47:07.190361 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]
May 17 01:47:07.190427 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]
May 17 01:47:07.190486 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window]
May 17 01:47:07.190546 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window]
May 17 01:47:07.190616 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff]
May 17 01:47:07.190677 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref]
May 17 01:47:07.190744 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff]
May 17 01:47:07.190805 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref]
May 17 01:47:07.190881 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff]
May 17 01:47:07.190943 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref]
May 17 01:47:07.191010 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff]
May 17 01:47:07.191070 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref]
May 17 01:47:07.191081 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff])
May 17 01:47:07.191212 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.191279 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.191342 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.191407 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.191468 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00
May 17 01:47:07.191529 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff]
May 17 01:47:07.191540 kernel: PCI host bridge to bus 0001:00
May 17 01:47:07.191603 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window]
May 17 01:47:07.191669 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window]
May 17 01:47:07.191729 kernel: pci_bus 0001:00: root bus resource [bus 00-ff]
May 17 01:47:07.191802 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000
May 17 01:47:07.191874 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400
May 17 01:47:07.191938 kernel: pci 0001:00:01.0: enabling Extended Tags
May 17 01:47:07.192002 kernel: pci 0001:00:01.0: supports D1 D2
May 17 01:47:07.192066 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.192142 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400
May 17 01:47:07.192208 kernel: pci 0001:00:02.0: supports D1 D2
May 17 01:47:07.192271 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.192344 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400
May 17 01:47:07.192410 kernel: pci 0001:00:03.0: supports D1 D2
May 17 01:47:07.192473 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.192544 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400
May 17 01:47:07.192612 kernel: pci 0001:00:04.0: supports D1 D2
May 17 01:47:07.192677 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.192688 kernel: acpiphp: Slot [1-6] registered
May 17 01:47:07.192759 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000
May 17 01:47:07.192826 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref]
May 17 01:47:07.192892 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref]
May 17 01:47:07.192958 kernel: pci 0001:01:00.0: PME# supported from D3cold
May 17 01:47:07.193024 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 01:47:07.193099 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000
May 17 01:47:07.193174 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref]
May 17 01:47:07.193240 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref]
May 17 01:47:07.193306 kernel: pci 0001:01:00.1: PME# supported from D3cold
May 17 01:47:07.193317 kernel: acpiphp: Slot [2-6] registered
May 17 01:47:07.193325 kernel: acpiphp: Slot [3-4] registered
May 17 01:47:07.193333 kernel: acpiphp: Slot [4-4] registered
May 17 01:47:07.193391 kernel: pci_bus 0001:00: on NUMA node 0
May 17 01:47:07.193456 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 01:47:07.193521 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.193586 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.193653 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.193718 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.193782 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.193846 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.193914 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.193978 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.194043 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.194107 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref]
May 17 01:47:07.194175 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff]
May 17 01:47:07.194239 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff]
May 17 01:47:07.194306 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref]
May 17 01:47:07.194370 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff]
May 17 01:47:07.194435 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref]
May 17 01:47:07.194499 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff]
May 17 01:47:07.194563 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref]
May 17 01:47:07.194628 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.194691 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.194755 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.194821 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.194886 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.194949 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.195014 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.195077 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.195341 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.195421 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.195487 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.195551 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.195619 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.195684 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.195749 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.195813 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.195880 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref]
May 17 01:47:07.195947 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref]
May 17 01:47:07.196013 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref]
May 17 01:47:07.196078 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref]
May 17 01:47:07.196149 kernel: pci 0001:00:01.0: PCI bridge to [bus 01]
May 17 01:47:07.196214 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]
May 17 01:47:07.196278 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]
May 17 01:47:07.196342 kernel: pci 0001:00:02.0: PCI bridge to [bus 02]
May 17 01:47:07.196406 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]
May 17 01:47:07.196469 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]
May 17 01:47:07.196536 kernel: pci 0001:00:03.0: PCI bridge to [bus 03]
May 17 01:47:07.196600 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]
May 17 01:47:07.196664 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]
May 17 01:47:07.196729 kernel: pci 0001:00:04.0: PCI bridge to [bus 04]
May 17 01:47:07.196792 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]
May 17 01:47:07.196856 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]
May 17 01:47:07.196917 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window]
May 17 01:47:07.196974 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window]
May 17 01:47:07.197051 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff]
May 17 01:47:07.197112 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref]
May 17 01:47:07.197182 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff]
May 17 01:47:07.197242 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref]
May 17 01:47:07.197307 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff]
May 17 01:47:07.197370 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref]
May 17 01:47:07.197435 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff]
May 17 01:47:07.197495 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref]
May 17 01:47:07.197505 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff])
May 17 01:47:07.197574 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.197637 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.197703 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.197766 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.197828 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00
May 17 01:47:07.197890 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff]
May 17 01:47:07.197901 kernel: PCI host bridge to bus 0004:00
May 17 01:47:07.197964 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window]
May 17 01:47:07.198021 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window]
May 17 01:47:07.198081 kernel: pci_bus 0004:00: root bus resource [bus 00-ff]
May 17 01:47:07.198155 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000
May 17 01:47:07.198228 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400
May 17 01:47:07.198293 kernel: pci 0004:00:01.0: supports D1 D2
May 17 01:47:07.198357 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.198427 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400
May 17 01:47:07.198493 kernel: pci 0004:00:03.0: supports D1 D2
May 17 01:47:07.198559 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.198630 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400
May 17 01:47:07.198695 kernel: pci 0004:00:05.0: supports D1 D2
May 17 01:47:07.198759 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.198833 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400
May 17 01:47:07.198899 kernel: pci 0004:01:00.0: enabling Extended Tags
May 17 01:47:07.198969 kernel: pci 0004:01:00.0: supports D1 D2
May 17 01:47:07.199034 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 01:47:07.199111 kernel: pci_bus 0004:02: extended config space not accessible
May 17 01:47:07.199190 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000
May 17 01:47:07.199260 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff]
May 17 01:47:07.199330 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff]
May 17 01:47:07.199397 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f]
May 17 01:47:07.199466 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb
May 17 01:47:07.199536 kernel: pci 0004:02:00.0: supports D1 D2
May 17 01:47:07.199604 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 01:47:07.199677 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330
May 17 01:47:07.199744 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit]
May 17 01:47:07.199810 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold
May 17 01:47:07.199871 kernel: pci_bus 0004:00: on NUMA node 0
May 17 01:47:07.199937 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000
May 17 01:47:07.200004 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.200069 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.200136 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 01:47:07.200202 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.200266 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.200330 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.200395 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 17 01:47:07.200461 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 01:47:07.200526 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff]
May 17 01:47:07.200590 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 01:47:07.200655 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff]
May 17 01:47:07.200718 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 01:47:07.200783 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.200846 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.200913 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.200976 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.201040 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.201103 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.201171 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.201235 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.201300 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.201364 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.201428 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.201493 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.201560 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 17 01:47:07.201627 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.201692 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.201764 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff]
May 17 01:47:07.201832 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff]
May 17 01:47:07.201901 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080]
May 17 01:47:07.201970 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080]
May 17 01:47:07.202038 kernel: pci 0004:01:00.0: PCI bridge to [bus 02]
May 17 01:47:07.202104 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]
May 17 01:47:07.202173 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02]
May 17 01:47:07.202238 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]
May 17 01:47:07.202302 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 01:47:07.202369 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit]
May 17 01:47:07.202433 kernel: pci 0004:00:03.0: PCI bridge to [bus 03]
May 17 01:47:07.202497 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]
May 17 01:47:07.202564 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 01:47:07.202629 kernel: pci 0004:00:05.0: PCI bridge to [bus 04]
May 17 01:47:07.202692 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]
May 17 01:47:07.202757 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 01:47:07.202816 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 01:47:07.202872 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window]
May 17 01:47:07.202932 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window]
May 17 01:47:07.203001 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff]
May 17 01:47:07.203061 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 01:47:07.203124 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff]
May 17 01:47:07.203195 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff]
May 17 01:47:07.203254 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 01:47:07.203324 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff]
May 17 01:47:07.203383 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 01:47:07.203393 kernel: iommu: Default domain type: Translated
May 17 01:47:07.203402 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 01:47:07.203410 kernel: efivars: Registered efivars operations
May 17 01:47:07.203477 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device
May 17 01:47:07.203546 kernel: pci 0004:02:00.0: vgaarb: bridge control possible
May 17 01:47:07.203615 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
May 17 01:47:07.203628 kernel: vgaarb: loaded
May 17 01:47:07.203636 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 01:47:07.203645 kernel: VFS: Disk quotas dquot_6.6.0
May 17 01:47:07.203653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 01:47:07.203661 kernel: pnp: PnP ACPI init
May 17 01:47:07.203730 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved
May 17 01:47:07.203790 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved
May 17 01:47:07.203851 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved
May 17 01:47:07.203909 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved
May 17 01:47:07.203968 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved
May 17 01:47:07.204026 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved
May 17 01:47:07.204086 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could
not be reserved May 17 01:47:07.204147 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 17 01:47:07.204158 kernel: pnp: PnP ACPI: found 1 devices May 17 01:47:07.204168 kernel: NET: Registered PF_INET protocol family May 17 01:47:07.204177 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204185 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 01:47:07.204193 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 01:47:07.204202 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 01:47:07.204210 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204218 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 17 01:47:07.204227 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204235 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 01:47:07.204312 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 17 01:47:07.204323 kernel: kvm [1]: IPA Size Limit: 48 bits May 17 01:47:07.204331 kernel: kvm [1]: GICv3: no GICV resource entry May 17 01:47:07.204340 kernel: kvm [1]: disabling GICv2 emulation May 17 01:47:07.204348 kernel: kvm [1]: GIC system register CPU interface enabled May 17 01:47:07.204358 kernel: kvm [1]: vgic interrupt IRQ9 May 17 01:47:07.204366 kernel: kvm [1]: VHE mode initialized successfully May 17 01:47:07.204374 kernel: Initialise system trusted keyrings May 17 01:47:07.204383 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 17 01:47:07.204392 kernel: Key type asymmetric registered May 17 01:47:07.204399 kernel: Asymmetric key parser 'x509' registered May 17 01:47:07.204408 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 17 01:47:07.204416 kernel: io scheduler mq-deadline registered May 17 01:47:07.204424 kernel: io scheduler kyber registered May 17 01:47:07.204432 kernel: io scheduler bfq registered May 17 01:47:07.204440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 01:47:07.204448 kernel: ACPI: button: Power Button [PWRB] May 17 01:47:07.204458 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 17 01:47:07.204466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 01:47:07.204539 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 17 01:47:07.204601 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.204661 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.204722 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 17 01:47:07.204782 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 17 01:47:07.204843 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 17 01:47:07.204912 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 17 01:47:07.204972 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.205032 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.205091 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 17 01:47:07.205154 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 17 01:47:07.205216 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 17 01:47:07.205287 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 17 01:47:07.205347 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.205407 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.205466 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 17 01:47:07.205526 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 17 01:47:07.205585 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 17 01:47:07.205654 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 17 01:47:07.205714 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.205774 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.205835 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 17 01:47:07.205894 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 17 01:47:07.205954 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 17 01:47:07.206029 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 17 01:47:07.206093 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.206158 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.206219 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 17 01:47:07.206279 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 17 01:47:07.206339 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 17 01:47:07.206406 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 17 01:47:07.206470 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.206529 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.206590 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 17 01:47:07.206649 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 17 01:47:07.206709 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 17 01:47:07.206778 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 17 01:47:07.206838 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.206901 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.206961 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 17 01:47:07.207021 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 17 01:47:07.207080 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 17 01:47:07.207152 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 17 01:47:07.207212 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.207276 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.207335 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 17 01:47:07.207396 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 17 01:47:07.207458 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 17 01:47:07.207469 kernel: thunder_xcv, ver 1.0 May 17 01:47:07.207477 kernel: thunder_bgx, ver 1.0 May 17 01:47:07.207485 kernel: nicpf, ver 1.0 May 17 01:47:07.207495 kernel: nicvf, ver 1.0 May 17 01:47:07.207562 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 01:47:07.207622 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T01:47:05 UTC (1747446425) May 17 01:47:07.207633 kernel: efifb: probing for efifb May 17 01:47:07.207641 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 17 01:47:07.207649 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 01:47:07.207658 kernel: efifb: scrolling: redraw May 
17 01:47:07.207666 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 01:47:07.207674 kernel: Console: switching to colour frame buffer device 100x37 May 17 01:47:07.207684 kernel: fb0: EFI VGA frame buffer device May 17 01:47:07.207692 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 17 01:47:07.207700 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 01:47:07.207708 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 01:47:07.207717 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 01:47:07.207725 kernel: watchdog: Hard watchdog permanently disabled May 17 01:47:07.207733 kernel: NET: Registered PF_INET6 protocol family May 17 01:47:07.207741 kernel: Segment Routing with IPv6 May 17 01:47:07.207749 kernel: In-situ OAM (IOAM) with IPv6 May 17 01:47:07.207759 kernel: NET: Registered PF_PACKET protocol family May 17 01:47:07.207767 kernel: Key type dns_resolver registered May 17 01:47:07.207775 kernel: registered taskstats version 1 May 17 01:47:07.207782 kernel: Loading compiled-in X.509 certificates May 17 01:47:07.207791 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 01:47:07.207799 kernel: Key type .fscrypt registered May 17 01:47:07.207807 kernel: Key type fscrypt-provisioning registered May 17 01:47:07.207815 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 01:47:07.207823 kernel: ima: Allocated hash algorithm: sha1 May 17 01:47:07.207833 kernel: ima: No architecture policies found May 17 01:47:07.207841 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 01:47:07.207909 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 17 01:47:07.207974 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 17 01:47:07.208041 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 17 01:47:07.208107 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 17 01:47:07.208176 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 17 01:47:07.208241 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 17 01:47:07.208310 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 17 01:47:07.208375 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 17 01:47:07.208441 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 17 01:47:07.208506 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 17 01:47:07.208572 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 17 01:47:07.208637 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 17 01:47:07.208703 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 17 01:47:07.208768 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 17 01:47:07.208834 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 17 01:47:07.208901 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 17 01:47:07.208967 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 17 01:47:07.209031 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 17 01:47:07.209097 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 17 01:47:07.209165 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 17 01:47:07.209231 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 17 01:47:07.209295 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 17 01:47:07.209361 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 May 17 01:47:07.209428 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 17 01:47:07.209494 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 17 01:47:07.209558 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 17 01:47:07.209625 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 17 01:47:07.209689 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 17 01:47:07.209758 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 17 01:47:07.209822 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 17 01:47:07.209887 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 17 01:47:07.209956 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 17 01:47:07.210021 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 17 01:47:07.210086 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 17 01:47:07.210155 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 17 01:47:07.210221 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 17 01:47:07.210286 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 17 01:47:07.210351 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 17 01:47:07.210416 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 17 01:47:07.210480 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 17 01:47:07.210548 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 17 01:47:07.210613 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 17 01:47:07.210678 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 17 01:47:07.210742 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 17 01:47:07.210807 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 17 01:47:07.210872 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 17 01:47:07.210938 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 17 01:47:07.211003 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
May 17 01:47:07.211070 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 17 01:47:07.211138 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 17 01:47:07.211205 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 17 01:47:07.211269 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 17 01:47:07.211335 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 17 01:47:07.211398 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 17 01:47:07.211464 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 17 01:47:07.211528 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 17 01:47:07.211596 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 17 01:47:07.211660 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 17 01:47:07.211725 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 17 01:47:07.211789 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 17 01:47:07.211858 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 17 01:47:07.211869 kernel: clk: Disabling unused clocks May 17 01:47:07.211877 kernel: Freeing unused kernel memory: 39424K May 17 01:47:07.211885 kernel: Run /init as init process May 17 01:47:07.211895 kernel: with arguments: May 17 01:47:07.211903 kernel: /init May 17 01:47:07.211911 kernel: with environment: May 17 01:47:07.211919 kernel: HOME=/ May 17 01:47:07.211927 kernel: TERM=linux May 17 01:47:07.211934 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 01:47:07.211945 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 01:47:07.211955 systemd[1]: Detected architecture arm64. May 17 01:47:07.211965 systemd[1]: Running in initrd. 
May 17 01:47:07.211973 systemd[1]: No hostname configured, using default hostname. May 17 01:47:07.211981 systemd[1]: Hostname set to . May 17 01:47:07.211989 systemd[1]: Initializing machine ID from random generator. May 17 01:47:07.211998 systemd[1]: Queued start job for default target initrd.target. May 17 01:47:07.212007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 01:47:07.212015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 01:47:07.212024 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 01:47:07.212034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 01:47:07.212043 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 01:47:07.212052 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 01:47:07.212061 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 01:47:07.212070 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 01:47:07.212079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 01:47:07.212089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 01:47:07.212097 systemd[1]: Reached target paths.target - Path Units. May 17 01:47:07.212106 systemd[1]: Reached target slices.target - Slice Units. May 17 01:47:07.212116 systemd[1]: Reached target swap.target - Swaps. May 17 01:47:07.212124 systemd[1]: Reached target timers.target - Timer Units. May 17 01:47:07.212136 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 01:47:07.212144 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 01:47:07.212153 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 01:47:07.212161 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 01:47:07.212172 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 01:47:07.212180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 01:47:07.212189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 01:47:07.212197 systemd[1]: Reached target sockets.target - Socket Units. May 17 01:47:07.212206 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 01:47:07.212214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 01:47:07.212223 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 01:47:07.212231 systemd[1]: Starting systemd-fsck-usr.service... May 17 01:47:07.212240 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 01:47:07.212250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 01:47:07.212281 systemd-journald[898]: Collecting audit messages is disabled. May 17 01:47:07.212301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 01:47:07.212311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 01:47:07.212320 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 01:47:07.212328 kernel: Bridge firewalling registered May 17 01:47:07.212337 systemd-journald[898]: Journal started May 17 01:47:07.212356 systemd-journald[898]: Runtime Journal (/run/log/journal/420f97750a9c4d609642c4a7b5194bd3) is 8.0M, max 4.0G, 3.9G free. 
May 17 01:47:07.169768 systemd-modules-load[900]: Inserted module 'overlay' May 17 01:47:07.244079 systemd[1]: Started systemd-journald.service - Journal Service. May 17 01:47:07.191846 systemd-modules-load[900]: Inserted module 'br_netfilter' May 17 01:47:07.249614 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 01:47:07.260373 systemd[1]: Finished systemd-fsck-usr.service. May 17 01:47:07.271093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 01:47:07.281670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:47:07.309305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 01:47:07.315332 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 01:47:07.332478 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 01:47:07.354362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 01:47:07.372156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 01:47:07.388796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 01:47:07.395740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 01:47:07.406967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 01:47:07.439281 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 01:47:07.452362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 17 01:47:07.460499 dracut-cmdline[939]: dracut-dracut-053 May 17 01:47:07.471554 dracut-cmdline[939]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 01:47:07.465866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 01:47:07.479719 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 01:47:07.488235 systemd-resolved[945]: Positive Trust Anchors: May 17 01:47:07.488244 systemd-resolved[945]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:47:07.488275 systemd-resolved[945]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 01:47:07.503188 systemd-resolved[945]: Defaulting to hostname 'linux'. May 17 01:47:07.516510 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 01:47:07.535540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 01:47:07.637864 kernel: SCSI subsystem initialized May 17 01:47:07.649141 kernel: Loading iSCSI transport class v2.0-870. 
May 17 01:47:07.668141 kernel: iscsi: registered transport (tcp) May 17 01:47:07.690144 kernel: iscsi: registered transport (qla4xxx) May 17 01:47:07.690170 kernel: QLogic iSCSI HBA Driver May 17 01:47:07.739605 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 01:47:07.761295 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 01:47:07.807438 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 01:47:07.807470 kernel: device-mapper: uevent: version 1.0.3 May 17 01:47:07.817035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 01:47:07.883143 kernel: raid6: neonx8 gen() 15846 MB/s May 17 01:47:07.908142 kernel: raid6: neonx4 gen() 15714 MB/s May 17 01:47:07.933142 kernel: raid6: neonx2 gen() 13341 MB/s May 17 01:47:07.958141 kernel: raid6: neonx1 gen() 10530 MB/s May 17 01:47:07.983142 kernel: raid6: int64x8 gen() 6991 MB/s May 17 01:47:08.008142 kernel: raid6: int64x4 gen() 7384 MB/s May 17 01:47:08.033138 kernel: raid6: int64x2 gen() 6150 MB/s May 17 01:47:08.061077 kernel: raid6: int64x1 gen() 5077 MB/s May 17 01:47:08.061100 kernel: raid6: using algorithm neonx8 gen() 15846 MB/s May 17 01:47:08.095762 kernel: raid6: .... xor() 11961 MB/s, rmw enabled May 17 01:47:08.095783 kernel: raid6: using neon recovery algorithm May 17 01:47:08.115144 kernel: xor: measuring software checksum speed May 17 01:47:08.123138 kernel: 8regs : 19052 MB/sec May 17 01:47:08.134372 kernel: 32regs : 19422 MB/sec May 17 01:47:08.134392 kernel: arm64_neon : 27213 MB/sec May 17 01:47:08.141991 kernel: xor: using function: arm64_neon (27213 MB/sec) May 17 01:47:08.203144 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 01:47:08.214195 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 17 01:47:08.232310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 01:47:08.245227 systemd-udevd[1133]: Using default interface naming scheme 'v255'.
May 17 01:47:08.248276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 01:47:08.263271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 01:47:08.277520 dracut-pre-trigger[1145]: rd.md=0: removing MD RAID activation
May 17 01:47:08.305193 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 01:47:08.326315 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 01:47:08.428020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 01:47:08.456743 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 01:47:08.456787 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 01:47:08.475264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 01:47:08.521844 kernel: ACPI: bus type USB registered
May 17 01:47:08.521865 kernel: usbcore: registered new interface driver usbfs
May 17 01:47:08.521875 kernel: usbcore: registered new interface driver hub
May 17 01:47:08.521886 kernel: PTP clock support registered
May 17 01:47:08.521895 kernel: usbcore: registered new device driver usb
May 17 01:47:08.517048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 01:47:08.674604 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 17 01:47:08.674625 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 17 01:47:08.674635 kernel: igb 0003:03:00.0: Adding to iommu group 31
May 17 01:47:08.674802 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32
May 17 01:47:08.674897 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 01:47:08.674978 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
May 17 01:47:08.675058 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
May 17 01:47:08.675143 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 33
May 17 01:47:08.675235 kernel: igb 0003:03:00.0: added PHC on eth0
May 17 01:47:08.675320 kernel: nvme 0005:03:00.0: Adding to iommu group 34
May 17 01:47:08.675407 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 17 01:47:08.675485 kernel: nvme 0005:04:00.0: Adding to iommu group 35
May 17 01:47:08.675571 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:5b:a4
May 17 01:47:08.672733 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 01:47:08.680136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 01:47:08.696150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 01:47:08.746124 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
May 17 01:47:08.746271 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 01:47:08.746358 kernel: igb 0003:03:00.1: Adding to iommu group 36
May 17 01:47:08.713339 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 01:47:08.761609 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 01:47:08.761669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:47:08.778452 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 01:47:08.789308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 01:47:08.789350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:47:08.806420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 01:47:08.824225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 01:47:08.836474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 01:47:09.040038 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
May 17 01:47:09.040198 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 01:47:09.040282 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
May 17 01:47:09.040359 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
May 17 01:47:09.040440 kernel: hub 1-0:1.0: USB hub found
May 17 01:47:09.040540 kernel: hub 1-0:1.0: 4 ports detected
May 17 01:47:09.040618 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
May 17 01:47:09.040707 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 01:47:09.040786 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 01:47:09.040878 kernel: hub 2-0:1.0: USB hub found
May 17 01:47:09.040964 kernel: hub 2-0:1.0: 4 ports detected
May 17 01:47:09.041042 kernel: nvme nvme0: pci function 0005:03:00.0
May 17 01:47:09.041130 kernel: nvme nvme1: pci function 0005:04:00.0
May 17 01:47:09.041220 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 17 01:47:09.041290 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 17 01:47:09.031540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:47:09.056266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 01:47:09.098146 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 17 01:47:09.098350 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 17 01:47:09.108138 kernel: igb 0003:03:00.1: added PHC on eth1
May 17 01:47:09.113404 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 17 01:47:09.124842 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:5b:a5
May 17 01:47:09.136402 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 17 01:47:09.145895 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 01:47:09.172261 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 01:47:09.172319 kernel: GPT:9289727 != 1875385007
May 17 01:47:09.172340 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 01:47:09.172360 kernel: GPT:9289727 != 1875385007
May 17 01:47:09.172378 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 01:47:09.172397 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 01:47:09.176882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:47:09.293641 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 17 01:47:09.293798 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 17 01:47:09.293895 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1202)
May 17 01:47:09.293906 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (1218)
May 17 01:47:09.293917 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 17 01:47:09.247821 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 17 01:47:09.302704 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 17 01:47:09.335456 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 17 01:47:09.316221 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 01:47:09.344303 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 01:47:09.356995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 01:47:09.387237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 01:47:09.414863 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 01:47:09.414884 disk-uuid[1303]: Primary Header is updated.
May 17 01:47:09.414884 disk-uuid[1303]: Secondary Entries is updated.
May 17 01:47:09.414884 disk-uuid[1303]: Secondary Header is updated.
May 17 01:47:09.451228 kernel: hub 1-3:1.0: USB hub found
May 17 01:47:09.451392 kernel: hub 1-3:1.0: 4 ports detected
May 17 01:47:09.539142 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 17 01:47:09.576402 kernel: hub 2-3:1.0: USB hub found
May 17 01:47:09.576690 kernel: hub 2-3:1.0: 4 ports detected
May 17 01:47:09.673156 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 01:47:09.686137 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 17 01:47:09.709264 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
May 17 01:47:09.709430 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 01:47:10.056155 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 17 01:47:10.366144 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 01:47:10.380138 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 17 01:47:10.399140 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 17 01:47:10.412641 disk-uuid[1304]: The operation has completed successfully.
May 17 01:47:10.418428 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 01:47:10.465026 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 01:47:10.465113 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 01:47:10.495282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 01:47:10.505475 sh[1483]: Success
May 17 01:47:10.524141 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 01:47:10.557416 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 01:47:10.578255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 01:47:10.588969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 01:47:10.623040 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 01:47:10.623074 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:10.640380 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 01:47:10.654382 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 01:47:10.665799 kernel: BTRFS info (device dm-0): using free space tree
May 17 01:47:10.686145 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 01:47:10.686544 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 01:47:10.696730 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 01:47:10.709291 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 01:47:10.715273 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 01:47:10.827180 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:10.827203 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:10.827219 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 01:47:10.827229 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 01:47:10.827241 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 01:47:10.827251 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:10.823189 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 01:47:10.850308 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 01:47:10.860949 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 01:47:10.892273 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 01:47:10.912019 systemd-networkd[1683]: lo: Link UP
May 17 01:47:10.912025 systemd-networkd[1683]: lo: Gained carrier
May 17 01:47:10.915599 systemd-networkd[1683]: Enumeration completed
May 17 01:47:10.915708 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 01:47:10.917176 systemd-networkd[1683]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:10.940438 ignition[1672]: Ignition 2.19.0
May 17 01:47:10.923955 systemd[1]: Reached target network.target - Network.
May 17 01:47:10.940444 ignition[1672]: Stage: fetch-offline
May 17 01:47:10.949758 unknown[1672]: fetched base config from "system"
May 17 01:47:10.940523 ignition[1672]: no configs at "/usr/lib/ignition/base.d"
May 17 01:47:10.949765 unknown[1672]: fetched user config from "system"
May 17 01:47:10.940531 ignition[1672]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:10.952462 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 01:47:10.940879 ignition[1672]: parsed url from cmdline: ""
May 17 01:47:10.967135 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 01:47:10.940882 ignition[1672]: no config URL provided
May 17 01:47:10.968637 systemd-networkd[1683]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:10.940886 ignition[1672]: reading system config file "/usr/lib/ignition/user.ign"
May 17 01:47:10.977271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 01:47:10.940940 ignition[1672]: parsing config with SHA512: 48bdeef7bc0348126e334dda4df7e44fd422dbd9714c6ac62cb1a3316dce390d6fe28b073b409c0dbce38a5b645be2c37f13445b9877c86d55830204b98a6925
May 17 01:47:11.019697 systemd-networkd[1683]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:10.950245 ignition[1672]: fetch-offline: fetch-offline passed
May 17 01:47:10.950250 ignition[1672]: POST message to Packet Timeline
May 17 01:47:10.950256 ignition[1672]: POST Status error: resource requires networking
May 17 01:47:10.950318 ignition[1672]: Ignition finished successfully
May 17 01:47:11.003756 ignition[1709]: Ignition 2.19.0
May 17 01:47:11.003763 ignition[1709]: Stage: kargs
May 17 01:47:11.003929 ignition[1709]: no configs at "/usr/lib/ignition/base.d"
May 17 01:47:11.003938 ignition[1709]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:11.005057 ignition[1709]: kargs: kargs passed
May 17 01:47:11.005062 ignition[1709]: POST message to Packet Timeline
May 17 01:47:11.005074 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:11.007744 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60840->[::1]:53: read: connection refused
May 17 01:47:11.207861 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #2
May 17 01:47:11.208309 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49792->[::1]:53: read: connection refused
May 17 01:47:11.599149 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 01:47:11.602276 systemd-networkd[1683]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:11.609100 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #3
May 17 01:47:11.609542 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43443->[::1]:53: read: connection refused
May 17 01:47:12.235148 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 01:47:12.238031 systemd-networkd[1683]: eno1: Link UP
May 17 01:47:12.238167 systemd-networkd[1683]: eno2: Link UP
May 17 01:47:12.238292 systemd-networkd[1683]: enP1p1s0f0np0: Link UP
May 17 01:47:12.238435 systemd-networkd[1683]: enP1p1s0f0np0: Gained carrier
May 17 01:47:12.245282 systemd-networkd[1683]: enP1p1s0f1np1: Link UP
May 17 01:47:12.284180 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv4 address 147.28.150.2/30, gateway 147.28.150.1 acquired from 147.28.144.140
May 17 01:47:12.410639 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #4
May 17 01:47:12.411247 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47826->[::1]:53: read: connection refused
May 17 01:47:12.605494 systemd-networkd[1683]: enP1p1s0f1np1: Gained carrier
May 17 01:47:13.277356 systemd-networkd[1683]: enP1p1s0f0np0: Gained IPv6LL
May 17 01:47:14.012085 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #5
May 17 01:47:14.012875 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57994->[::1]:53: read: connection refused
May 17 01:47:14.045316 systemd-networkd[1683]: enP1p1s0f1np1: Gained IPv6LL
May 17 01:47:17.216098 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #6
May 17 01:47:17.719746 ignition[1709]: GET result: OK
May 17 01:47:17.995607 ignition[1709]: Ignition finished successfully
May 17 01:47:17.998243 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 01:47:18.019252 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 01:47:18.034574 ignition[1731]: Ignition 2.19.0
May 17 01:47:18.034581 ignition[1731]: Stage: disks
May 17 01:47:18.034794 ignition[1731]: no configs at "/usr/lib/ignition/base.d"
May 17 01:47:18.034803 ignition[1731]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:18.036248 ignition[1731]: disks: disks passed
May 17 01:47:18.036253 ignition[1731]: POST message to Packet Timeline
May 17 01:47:18.036267 ignition[1731]: GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:18.934120 ignition[1731]: GET result: OK
May 17 01:47:19.348695 ignition[1731]: Ignition finished successfully
May 17 01:47:19.352231 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 01:47:19.357535 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 01:47:19.365151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 01:47:19.373210 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 01:47:19.381854 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 01:47:19.390802 systemd[1]: Reached target basic.target - Basic System.
May 17 01:47:19.411280 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 01:47:19.426457 systemd-fsck[1750]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 01:47:19.430010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 01:47:19.452217 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 01:47:19.520140 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 01:47:19.520520 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 01:47:19.530685 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 01:47:19.552211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 01:47:19.644168 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1761)
May 17 01:47:19.644186 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:19.644197 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:19.644207 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 01:47:19.644217 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 01:47:19.644227 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 01:47:19.558277 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 01:47:19.653723 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 01:47:19.660947 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 17 01:47:19.676494 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 01:47:19.676538 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 01:47:19.689717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 01:47:19.703216 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 01:47:19.724783 coreos-metadata[1781]: May 17 01:47:19.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:47:19.737725 coreos-metadata[1782]: May 17 01:47:19.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:47:19.727241 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 01:47:19.759867 initrd-setup-root[1811]: cut: /sysroot/etc/passwd: No such file or directory
May 17 01:47:19.765904 initrd-setup-root[1819]: cut: /sysroot/etc/group: No such file or directory
May 17 01:47:19.772277 initrd-setup-root[1827]: cut: /sysroot/etc/shadow: No such file or directory
May 17 01:47:19.778373 initrd-setup-root[1835]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 01:47:19.848983 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 01:47:19.872205 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 01:47:19.880139 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:19.903306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 01:47:19.914305 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 01:47:19.928932 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 01:47:19.934407 ignition[1912]: INFO : Ignition 2.19.0
May 17 01:47:19.934407 ignition[1912]: INFO : Stage: mount
May 17 01:47:19.934407 ignition[1912]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:47:19.934407 ignition[1912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:19.934407 ignition[1912]: INFO : mount: mount passed
May 17 01:47:19.934407 ignition[1912]: INFO : POST message to Packet Timeline
May 17 01:47:19.934407 ignition[1912]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:20.207981 coreos-metadata[1781]: May 17 01:47:20.207 INFO Fetch successful
May 17 01:47:20.214460 coreos-metadata[1782]: May 17 01:47:20.214 INFO Fetch successful
May 17 01:47:20.252698 coreos-metadata[1781]: May 17 01:47:20.252 INFO wrote hostname ci-4081.3.3-n-a9b446c9a0 to /sysroot/etc/hostname
May 17 01:47:20.255767 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 01:47:20.266796 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 01:47:20.266894 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 17 01:47:20.567763 ignition[1912]: INFO : GET result: OK
May 17 01:47:20.860752 ignition[1912]: INFO : Ignition finished successfully
May 17 01:47:20.863061 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 01:47:20.883192 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 01:47:20.894965 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 01:47:20.929432 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1939)
May 17 01:47:20.929469 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:20.943682 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:20.956529 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 01:47:20.979049 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 01:47:20.979070 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 01:47:20.987198 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 01:47:21.018433 ignition[1958]: INFO : Ignition 2.19.0
May 17 01:47:21.018433 ignition[1958]: INFO : Stage: files
May 17 01:47:21.027479 ignition[1958]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:47:21.027479 ignition[1958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:21.027479 ignition[1958]: DEBUG : files: compiled without relabeling support, skipping
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 01:47:21.027479 ignition[1958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 17 01:47:21.023894 unknown[1958]: wrote ssh authorized keys file for user: core
May 17 01:47:21.138274 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 01:47:21.194265 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 17 01:47:21.631398 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 01:47:21.948046 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.948046 ignition[1958]: INFO : files: op(c): [started] processing unit "containerd.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:47:21.972524 ignition[1958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:47:21.972524 ignition[1958]: INFO : files: files passed
May 17 01:47:21.972524 ignition[1958]: INFO : POST message to Packet Timeline
May 17 01:47:21.972524 ignition[1958]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:22.629516 ignition[1958]: INFO : GET result: OK
May 17 01:47:22.928164 ignition[1958]: INFO : Ignition finished successfully
May 17 01:47:22.931205 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 01:47:22.949267 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 01:47:22.955894 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 01:47:22.967472 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 01:47:22.967547 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 01:47:23.002207 initrd-setup-root-after-ignition[1997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:47:23.002207 initrd-setup-root-after-ignition[1997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:47:22.985563 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 01:47:23.047676 initrd-setup-root-after-ignition[2001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:47:22.998054 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 01:47:23.018325 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 01:47:23.050152 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 01:47:23.050228 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 01:47:23.064599 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 01:47:23.075593 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 01:47:23.092316 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 01:47:23.107239 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 01:47:23.130654 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 01:47:23.168250 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 01:47:23.182359 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 01:47:23.191421 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 01:47:23.202697 systemd[1]: Stopped target timers.target - Timer Units. May 17 01:47:23.214025 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 01:47:23.214126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 01:47:23.225428 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 01:47:23.236376 systemd[1]: Stopped target basic.target - Basic System. May 17 01:47:23.247572 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 01:47:23.258748 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 01:47:23.269753 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 01:47:23.280722 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
May 17 01:47:23.291720 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 01:47:23.302745 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 01:47:23.313695 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 01:47:23.330167 systemd[1]: Stopped target swap.target - Swaps. May 17 01:47:23.341253 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 01:47:23.341351 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 01:47:23.352681 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 01:47:23.363605 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 01:47:23.374747 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 01:47:23.378174 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 01:47:23.385934 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 01:47:23.386031 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 01:47:23.397198 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 01:47:23.397337 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 01:47:23.408319 systemd[1]: Stopped target paths.target - Path Units. May 17 01:47:23.419349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 01:47:23.423156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 01:47:23.436224 systemd[1]: Stopped target slices.target - Slice Units. May 17 01:47:23.447567 systemd[1]: Stopped target sockets.target - Socket Units. May 17 01:47:23.458935 systemd[1]: iscsid.socket: Deactivated successfully. May 17 01:47:23.459064 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 01:47:23.470377 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 01:47:23.569588 ignition[2023]: INFO : Ignition 2.19.0
May 17 01:47:23.569588 ignition[2023]: INFO : Stage: umount
May 17 01:47:23.569588 ignition[2023]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:47:23.569588 ignition[2023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:23.569588 ignition[2023]: INFO : umount: umount passed
May 17 01:47:23.569588 ignition[2023]: INFO : POST message to Packet Timeline
May 17 01:47:23.569588 ignition[2023]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:23.470472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 01:47:23.481893 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 01:47:23.481981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 01:47:23.493328 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 01:47:23.493413 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 01:47:23.504742 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 01:47:23.504825 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 01:47:23.532273 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 01:47:23.539454 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 01:47:23.539557 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 01:47:23.552486 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 01:47:23.563723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 01:47:23.563830 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 01:47:23.575363 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 01:47:23.575449 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 01:47:23.588841 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 01:47:23.590917 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 01:47:23.591003 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 01:47:23.630610 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 01:47:23.630859 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 01:47:24.058802 ignition[2023]: INFO : GET result: OK
May 17 01:47:24.390655 ignition[2023]: INFO : Ignition finished successfully
May 17 01:47:24.393938 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 01:47:24.394234 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 01:47:24.400896 systemd[1]: Stopped target network.target - Network.
May 17 01:47:24.409825 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 01:47:24.409878 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 01:47:24.419382 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 01:47:24.419414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 01:47:24.428875 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 01:47:24.428930 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 01:47:24.438397 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 01:47:24.438457 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 01:47:24.448138 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 01:47:24.448166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 01:47:24.457967 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 01:47:24.463153 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv6 lease lost
May 17 01:47:24.467416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 01:47:24.471240 systemd-networkd[1683]: enP1p1s0f1np1: DHCPv6 lease lost
May 17 01:47:24.478670 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 01:47:24.478945 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 01:47:24.493969 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 01:47:24.494608 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 01:47:24.502742 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 01:47:24.502879 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 01:47:24.524259 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 01:47:24.530858 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 01:47:24.530906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 01:47:24.540780 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 01:47:24.540813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 01:47:24.550645 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 01:47:24.550676 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 01:47:24.560603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 01:47:24.560632 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 01:47:24.570904 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 01:47:24.595458 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 01:47:24.595587 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 01:47:24.604256 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 01:47:24.604426 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 01:47:24.613124 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 01:47:24.613158 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 01:47:24.623598 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 01:47:24.623637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 01:47:24.644593 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 01:47:24.644632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 01:47:24.655284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 01:47:24.655341 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:47:24.677322 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 01:47:24.693303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 01:47:24.693362 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 01:47:24.709576 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 01:47:24.709604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:47:24.721549 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 01:47:24.721641 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 01:47:25.259026 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 01:47:25.259199 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 01:47:25.270322 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 01:47:25.293244 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 01:47:25.301707 systemd[1]: Switching root. May 17 01:47:25.359298 systemd-journald[898]: Journal stopped May 17 01:47:07.150481 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 17 01:47:07.150503 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025 May 17 01:47:07.150512 kernel: KASLR enabled May 17 01:47:07.150517 kernel: efi: EFI v2.7 by American Megatrends May 17 01:47:07.150523 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea464818 RNG=0xebf00018 MEMRESERVE=0xe4663f98 May 17 01:47:07.150529 kernel: random: crng init done May 17 01:47:07.150536 kernel: esrt: Reserving ESRT space from 0x00000000ea464818 to 0x00000000ea464878. May 17 01:47:07.150542 kernel: ACPI: Early table checksum verification disabled May 17 01:47:07.150550 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) May 17 01:47:07.150556 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 17 01:47:07.150562 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 17 01:47:07.150568 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 17 01:47:07.150574 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 17 01:47:07.150580 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 17 01:47:07.150589 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 17 01:47:07.150595 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 17 01:47:07.150602 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 17 01:47:07.150608 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 
00010013) May 17 01:47:07.150615 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 17 01:47:07.150621 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 17 01:47:07.150628 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) May 17 01:47:07.150634 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 17 01:47:07.150640 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 17 01:47:07.150648 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 17 01:47:07.150655 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013) May 17 01:47:07.150661 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 17 01:47:07.150667 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 17 01:47:07.150674 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 17 01:47:07.150680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 17 01:47:07.150687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 17 01:47:07.150693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 17 01:47:07.150700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 17 01:47:07.150706 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff] May 17 01:47:07.150712 kernel: Zone ranges: May 17 01:47:07.150719 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 17 01:47:07.150726 kernel: DMA32 empty May 17 01:47:07.150733 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 17 01:47:07.150739 kernel: Movable zone start for each node May 17 01:47:07.150745 kernel: Early memory node ranges May 17 01:47:07.150752 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 17 01:47:07.150761 kernel: node 
0: [mem 0x0000000090000000-0x0000000091ffffff] May 17 01:47:07.150768 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 17 01:47:07.150776 kernel: node 0: [mem 0x0000000094000000-0x00000000eba36fff] May 17 01:47:07.150782 kernel: node 0: [mem 0x00000000eba37000-0x00000000ebeadfff] May 17 01:47:07.150789 kernel: node 0: [mem 0x00000000ebeae000-0x00000000ebeaefff] May 17 01:47:07.150796 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff] May 17 01:47:07.150802 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] May 17 01:47:07.150809 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] May 17 01:47:07.150815 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] May 17 01:47:07.150822 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] May 17 01:47:07.150829 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff] May 17 01:47:07.150835 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff] May 17 01:47:07.150844 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 17 01:47:07.150850 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 17 01:47:07.150857 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 17 01:47:07.150864 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 17 01:47:07.150870 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 17 01:47:07.150877 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 17 01:47:07.150884 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 17 01:47:07.150891 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 17 01:47:07.150897 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 17 01:47:07.150904 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 17 01:47:07.150911 kernel: psci: probing for conduit method from ACPI. May 17 01:47:07.150919 kernel: psci: PSCIv1.1 detected in firmware. 
May 17 01:47:07.150926 kernel: psci: Using standard PSCI v0.2 function IDs May 17 01:47:07.150932 kernel: psci: MIGRATE_INFO_TYPE not supported. May 17 01:47:07.150939 kernel: psci: SMC Calling Convention v1.2 May 17 01:47:07.150946 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 17 01:47:07.150952 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 17 01:47:07.150959 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 17 01:47:07.150966 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 17 01:47:07.150973 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 17 01:47:07.150979 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 17 01:47:07.150986 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 17 01:47:07.150993 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 17 01:47:07.151001 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 17 01:47:07.151007 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 17 01:47:07.151014 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 17 01:47:07.151021 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 17 01:47:07.151027 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 17 01:47:07.151034 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 17 01:47:07.151041 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 17 01:47:07.151047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 17 01:47:07.151054 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 17 01:47:07.151061 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 17 01:47:07.151067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 17 01:47:07.151074 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 17 01:47:07.151082 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 17 01:47:07.151089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 
-> Node 0 May 17 01:47:07.151095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 17 01:47:07.151102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 May 17 01:47:07.151109 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 17 01:47:07.151115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 17 01:47:07.151122 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 17 01:47:07.151128 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 17 01:47:07.151172 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 17 01:47:07.151179 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 17 01:47:07.151186 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 17 01:47:07.151194 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 17 01:47:07.151201 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 17 01:47:07.151208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 17 01:47:07.151215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 17 01:47:07.151221 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 17 01:47:07.151228 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 17 01:47:07.151235 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 17 01:47:07.151241 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 17 01:47:07.151248 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 May 17 01:47:07.151255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 17 01:47:07.151262 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 17 01:47:07.151268 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 17 01:47:07.151277 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 17 01:47:07.151283 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 17 01:47:07.151290 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 17 
01:47:07.151297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 17 01:47:07.151303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 May 17 01:47:07.151310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 17 01:47:07.151317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 17 01:47:07.151323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 17 01:47:07.151337 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 17 01:47:07.151344 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 17 01:47:07.151353 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 17 01:47:07.151360 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 17 01:47:07.151367 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 17 01:47:07.151374 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 17 01:47:07.151381 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 17 01:47:07.151389 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 17 01:47:07.151397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 17 01:47:07.151404 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 17 01:47:07.151412 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 17 01:47:07.151419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 17 01:47:07.151426 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 17 01:47:07.151433 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 17 01:47:07.151440 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 17 01:47:07.151447 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 17 01:47:07.151454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 17 01:47:07.151461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 17 01:47:07.151468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 17 
01:47:07.151476 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 17 01:47:07.151484 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 May 17 01:47:07.151491 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 17 01:47:07.151498 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 17 01:47:07.151505 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 17 01:47:07.151512 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 17 01:47:07.151519 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 17 01:47:07.151526 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 17 01:47:07.151534 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 17 01:47:07.151541 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 17 01:47:07.151548 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 17 01:47:07.151555 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 17 01:47:07.151564 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 17 01:47:07.151571 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 17 01:47:07.151578 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 17 01:47:07.151585 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 17 01:47:07.151592 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 17 01:47:07.151599 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 17 01:47:07.151606 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 17 01:47:07.151613 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 17 01:47:07.151620 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 17 01:47:07.151627 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 17 
01:47:07.151634 kernel: Detected PIPT I-cache on CPU0 May 17 01:47:07.151643 kernel: CPU features: detected: GIC system register CPU interface May 17 01:47:07.151650 kernel: CPU features: detected: Virtualization Host Extensions May 17 01:47:07.151657 kernel: CPU features: detected: Hardware dirty bit management May 17 01:47:07.151665 kernel: CPU features: detected: Spectre-v4 May 17 01:47:07.151672 kernel: CPU features: detected: Spectre-BHB May 17 01:47:07.151679 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 01:47:07.151686 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 01:47:07.151694 kernel: CPU features: detected: ARM erratum 1418040 May 17 01:47:07.151701 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 01:47:07.151708 kernel: alternatives: applying boot alternatives May 17 01:47:07.151717 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 01:47:07.151725 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 17 01:47:07.151733 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 17 01:47:07.151740 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 17 01:47:07.151747 kernel: printk: log_buf_len min size: 262144 bytes May 17 01:47:07.151754 kernel: printk: log_buf_len: 1048576 bytes May 17 01:47:07.151761 kernel: printk: early log buf free: 249904(95%) May 17 01:47:07.151769 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 17 01:47:07.151776 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 17 01:47:07.151783 kernel: Fallback order for Node 0: 0 May 17 01:47:07.151790 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 17 01:47:07.151797 kernel: Policy zone: Normal May 17 01:47:07.151806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 01:47:07.151813 kernel: software IO TLB: area num 128. May 17 01:47:07.151820 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 17 01:47:07.151827 kernel: Memory: 262922456K/268174336K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 5251880K reserved, 0K cma-reserved) May 17 01:47:07.151835 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 17 01:47:07.151842 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 01:47:07.151850 kernel: rcu: RCU event tracing is enabled. May 17 01:47:07.151857 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 17 01:47:07.151864 kernel: Trampoline variant of Tasks RCU enabled. May 17 01:47:07.151872 kernel: Tracing variant of Tasks RCU enabled. May 17 01:47:07.151879 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 01:47:07.151887 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 17 01:47:07.151895 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 01:47:07.151902 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 17 01:47:07.151909 kernel: GICv3: 672 SPIs implemented May 17 01:47:07.151916 kernel: GICv3: 0 Extended SPIs implemented May 17 01:47:07.151923 kernel: Root IRQ handler: gic_handle_irq May 17 01:47:07.151930 kernel: GICv3: GICv3 features: 16 PPIs May 17 01:47:07.151937 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 17 01:47:07.151944 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 17 01:47:07.151952 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 17 01:47:07.151959 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 17 01:47:07.151966 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 17 01:47:07.151973 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 17 01:47:07.151981 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 17 01:47:07.151988 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 17 01:47:07.151995 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 17 01:47:07.152002 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 17 01:47:07.152010 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152017 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152024 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 17 01:47:07.152031 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152039 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152046 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 17 01:47:07.152054 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152062 kernel: 
ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152069 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 17 01:47:07.152077 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152084 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152091 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 17 01:47:07.152099 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152106 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152113 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 17 01:47:07.152120 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152127 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152137 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 17 01:47:07.152146 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152153 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152160 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 17 01:47:07.152168 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 17 01:47:07.152175 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 17 01:47:07.152182 kernel: GICv3: using LPI property table @0x00000800003e0000 May 17 01:47:07.152189 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 17 01:47:07.152197 kernel: rcu: srcu_init: 
Setting srcu_struct sizes based on contention. May 17 01:47:07.152204 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:47:07.152211 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 17 01:47:07.152218 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 17 01:47:07.152227 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 01:47:07.152234 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 01:47:07.152241 kernel: Console: colour dummy device 80x25 May 17 01:47:07.152249 kernel: printk: console [tty0] enabled May 17 01:47:07.152256 kernel: ACPI: Core revision 20230628 May 17 01:47:07.152263 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 01:47:07.152271 kernel: pid_max: default: 81920 minimum: 640 May 17 01:47:07.152278 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 01:47:07.152285 kernel: landlock: Up and running. May 17 01:47:07.152293 kernel: SELinux: Initializing. May 17 01:47:07.152302 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:47:07.152309 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:47:07.152316 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 17 01:47:07.152324 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 17 01:47:07.152331 kernel: rcu: Hierarchical SRCU implementation. May 17 01:47:07.152339 kernel: rcu: Max phase no-delay instances is 400. 
May 17 01:47:07.152346 kernel: Platform MSI: ITS@0x100100040000 domain created May 17 01:47:07.152353 kernel: Platform MSI: ITS@0x100100060000 domain created May 17 01:47:07.152360 kernel: Platform MSI: ITS@0x100100080000 domain created May 17 01:47:07.152369 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 17 01:47:07.152376 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 17 01:47:07.152383 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 17 01:47:07.152390 kernel: Platform MSI: ITS@0x100100100000 domain created May 17 01:47:07.152397 kernel: Platform MSI: ITS@0x100100120000 domain created May 17 01:47:07.152405 kernel: PCI/MSI: ITS@0x100100040000 domain created May 17 01:47:07.152412 kernel: PCI/MSI: ITS@0x100100060000 domain created May 17 01:47:07.152419 kernel: PCI/MSI: ITS@0x100100080000 domain created May 17 01:47:07.152426 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 17 01:47:07.152435 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 17 01:47:07.152442 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 17 01:47:07.152449 kernel: PCI/MSI: ITS@0x100100100000 domain created May 17 01:47:07.152456 kernel: PCI/MSI: ITS@0x100100120000 domain created May 17 01:47:07.152463 kernel: Remapping and enabling EFI services. May 17 01:47:07.152471 kernel: smp: Bringing up secondary CPUs ... 
May 17 01:47:07.152478 kernel: Detected PIPT I-cache on CPU1
May 17 01:47:07.152485 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 17 01:47:07.152493 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 17 01:47:07.152502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152509 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 17 01:47:07.152516 kernel: Detected PIPT I-cache on CPU2
May 17 01:47:07.152524 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 17 01:47:07.152531 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 17 01:47:07.152538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152545 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 17 01:47:07.152552 kernel: Detected PIPT I-cache on CPU3
May 17 01:47:07.152560 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 17 01:47:07.152567 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 17 01:47:07.152576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152583 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 17 01:47:07.152590 kernel: Detected PIPT I-cache on CPU4
May 17 01:47:07.152597 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 17 01:47:07.152605 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 17 01:47:07.152612 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152619 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 17 01:47:07.152626 kernel: Detected PIPT I-cache on CPU5
May 17 01:47:07.152633 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 17 01:47:07.152642 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 17 01:47:07.152649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152657 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 17 01:47:07.152664 kernel: Detected PIPT I-cache on CPU6
May 17 01:47:07.152671 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 17 01:47:07.152679 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 17 01:47:07.152686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152693 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 17 01:47:07.152700 kernel: Detected PIPT I-cache on CPU7
May 17 01:47:07.152708 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 17 01:47:07.152716 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 17 01:47:07.152724 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152731 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 17 01:47:07.152738 kernel: Detected PIPT I-cache on CPU8
May 17 01:47:07.152746 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 17 01:47:07.152753 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 17 01:47:07.152760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152767 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 17 01:47:07.152774 kernel: Detected PIPT I-cache on CPU9
May 17 01:47:07.152782 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 17 01:47:07.152790 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 17 01:47:07.152797 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152805 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 17 01:47:07.152812 kernel: Detected PIPT I-cache on CPU10
May 17 01:47:07.152819 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 17 01:47:07.152826 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 17 01:47:07.152834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152841 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 17 01:47:07.152848 kernel: Detected PIPT I-cache on CPU11
May 17 01:47:07.152857 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 17 01:47:07.152864 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 17 01:47:07.152872 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152879 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 17 01:47:07.152886 kernel: Detected PIPT I-cache on CPU12
May 17 01:47:07.152893 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 17 01:47:07.152901 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 17 01:47:07.152908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152915 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 17 01:47:07.152923 kernel: Detected PIPT I-cache on CPU13
May 17 01:47:07.152932 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 17 01:47:07.152939 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 17 01:47:07.152946 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152954 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 17 01:47:07.152961 kernel: Detected PIPT I-cache on CPU14
May 17 01:47:07.152968 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 17 01:47:07.152976 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 17 01:47:07.152983 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.152990 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 17 01:47:07.152999 kernel: Detected PIPT I-cache on CPU15
May 17 01:47:07.153006 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 17 01:47:07.153013 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 17 01:47:07.153021 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153028 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 17 01:47:07.153035 kernel: Detected PIPT I-cache on CPU16
May 17 01:47:07.153043 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 17 01:47:07.153050 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 17 01:47:07.153057 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153074 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 17 01:47:07.153083 kernel: Detected PIPT I-cache on CPU17
May 17 01:47:07.153090 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 17 01:47:07.153098 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 17 01:47:07.153105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153113 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 17 01:47:07.153121 kernel: Detected PIPT I-cache on CPU18
May 17 01:47:07.153128 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 17 01:47:07.153139 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 17 01:47:07.153148 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153156 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 17 01:47:07.153164 kernel: Detected PIPT I-cache on CPU19
May 17 01:47:07.153171 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 17 01:47:07.153179 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
May 17 01:47:07.153186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153194 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 17 01:47:07.153203 kernel: Detected PIPT I-cache on CPU20
May 17 01:47:07.153211 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 17 01:47:07.153219 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
May 17 01:47:07.153226 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153234 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 17 01:47:07.153242 kernel: Detected PIPT I-cache on CPU21
May 17 01:47:07.153251 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 17 01:47:07.153259 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
May 17 01:47:07.153266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153275 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 17 01:47:07.153283 kernel: Detected PIPT I-cache on CPU22
May 17 01:47:07.153290 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 17 01:47:07.153298 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
May 17 01:47:07.153306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153314 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 17 01:47:07.153321 kernel: Detected PIPT I-cache on CPU23
May 17 01:47:07.153329 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 17 01:47:07.153336 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
May 17 01:47:07.153346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153354 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 17 01:47:07.153361 kernel: Detected PIPT I-cache on CPU24
May 17 01:47:07.153369 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 17 01:47:07.153377 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
May 17 01:47:07.153384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153392 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 17 01:47:07.153401 kernel: Detected PIPT I-cache on CPU25
May 17 01:47:07.153409 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 17 01:47:07.153416 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
May 17 01:47:07.153425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153433 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 17 01:47:07.153441 kernel: Detected PIPT I-cache on CPU26
May 17 01:47:07.153448 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 17 01:47:07.153456 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
May 17 01:47:07.153464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153471 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 17 01:47:07.153479 kernel: Detected PIPT I-cache on CPU27
May 17 01:47:07.153487 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 17 01:47:07.153496 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
May 17 01:47:07.153503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153511 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 17 01:47:07.153519 kernel: Detected PIPT I-cache on CPU28
May 17 01:47:07.153526 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
May 17 01:47:07.153534 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000
May 17 01:47:07.153542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153550 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
May 17 01:47:07.153557 kernel: Detected PIPT I-cache on CPU29
May 17 01:47:07.153565 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
May 17 01:47:07.153574 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000
May 17 01:47:07.153582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153590 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
May 17 01:47:07.153597 kernel: Detected PIPT I-cache on CPU30
May 17 01:47:07.153605 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
May 17 01:47:07.153613 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000
May 17 01:47:07.153621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153628 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
May 17 01:47:07.153636 kernel: Detected PIPT I-cache on CPU31
May 17 01:47:07.153645 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
May 17 01:47:07.153653 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000
May 17 01:47:07.153660 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153668 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
May 17 01:47:07.153676 kernel: Detected PIPT I-cache on CPU32
May 17 01:47:07.153683 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
May 17 01:47:07.153691 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000
May 17 01:47:07.153698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153706 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1]
May 17 01:47:07.153715 kernel: Detected PIPT I-cache on CPU33
May 17 01:47:07.153723 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000
May 17 01:47:07.153731 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000
May 17 01:47:07.153738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153746 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1]
May 17 01:47:07.153753 kernel: Detected PIPT I-cache on CPU34
May 17 01:47:07.153761 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000
May 17 01:47:07.153769 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000
May 17 01:47:07.153776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153784 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1]
May 17 01:47:07.153793 kernel: Detected PIPT I-cache on CPU35
May 17 01:47:07.153801 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000
May 17 01:47:07.153808 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000
May 17 01:47:07.153816 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153823 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1]
May 17 01:47:07.153831 kernel: Detected PIPT I-cache on CPU36
May 17 01:47:07.153839 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000
May 17 01:47:07.153846 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000
May 17 01:47:07.153854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153863 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1]
May 17 01:47:07.153871 kernel: Detected PIPT I-cache on CPU37
May 17 01:47:07.153878 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000
May 17 01:47:07.153886 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000
May 17 01:47:07.153894 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153901 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1]
May 17 01:47:07.153909 kernel: Detected PIPT I-cache on CPU38
May 17 01:47:07.153916 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000
May 17 01:47:07.153925 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000
May 17 01:47:07.153933 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153942 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1]
May 17 01:47:07.153950 kernel: Detected PIPT I-cache on CPU39
May 17 01:47:07.153957 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000
May 17 01:47:07.153965 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000
May 17 01:47:07.153973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.153981 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1]
May 17 01:47:07.153988 kernel: Detected PIPT I-cache on CPU40
May 17 01:47:07.153996 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000
May 17 01:47:07.154005 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000
May 17 01:47:07.154013 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154020 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1]
May 17 01:47:07.154028 kernel: Detected PIPT I-cache on CPU41
May 17 01:47:07.154036 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000
May 17 01:47:07.154043 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000
May 17 01:47:07.154051 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154059 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1]
May 17 01:47:07.154066 kernel: Detected PIPT I-cache on CPU42
May 17 01:47:07.154075 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000
May 17 01:47:07.154083 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000
May 17 01:47:07.154091 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154098 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1]
May 17 01:47:07.154106 kernel: Detected PIPT I-cache on CPU43
May 17 01:47:07.154113 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000
May 17 01:47:07.154121 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000
May 17 01:47:07.154129 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154139 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1]
May 17 01:47:07.154146 kernel: Detected PIPT I-cache on CPU44
May 17 01:47:07.154156 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000
May 17 01:47:07.154164 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000
May 17 01:47:07.154171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154179 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1]
May 17 01:47:07.154186 kernel: Detected PIPT I-cache on CPU45
May 17 01:47:07.154194 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000
May 17 01:47:07.154202 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000
May 17 01:47:07.154210 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154218 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1]
May 17 01:47:07.154227 kernel: Detected PIPT I-cache on CPU46
May 17 01:47:07.154234 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000
May 17 01:47:07.154242 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000
May 17 01:47:07.154250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154257 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1]
May 17 01:47:07.154265 kernel: Detected PIPT I-cache on CPU47
May 17 01:47:07.154272 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000
May 17 01:47:07.154280 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000
May 17 01:47:07.154288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154295 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1]
May 17 01:47:07.154304 kernel: Detected PIPT I-cache on CPU48
May 17 01:47:07.154312 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000
May 17 01:47:07.154319 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000
May 17 01:47:07.154327 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154335 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1]
May 17 01:47:07.154342 kernel: Detected PIPT I-cache on CPU49
May 17 01:47:07.154350 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000
May 17 01:47:07.154358 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000
May 17 01:47:07.154365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154374 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1]
May 17 01:47:07.154382 kernel: Detected PIPT I-cache on CPU50
May 17 01:47:07.154391 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000
May 17 01:47:07.154399 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000
May 17 01:47:07.154406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154414 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1]
May 17 01:47:07.154421 kernel: Detected PIPT I-cache on CPU51
May 17 01:47:07.154429 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000
May 17 01:47:07.154437 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000
May 17 01:47:07.154446 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154454 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1]
May 17 01:47:07.154461 kernel: Detected PIPT I-cache on CPU52
May 17 01:47:07.154469 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000
May 17 01:47:07.154477 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000
May 17 01:47:07.154485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154492 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1]
May 17 01:47:07.154500 kernel: Detected PIPT I-cache on CPU53
May 17 01:47:07.154508 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000
May 17 01:47:07.154515 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000
May 17 01:47:07.154525 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154532 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1]
May 17 01:47:07.154540 kernel: Detected PIPT I-cache on CPU54
May 17 01:47:07.154548 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000
May 17 01:47:07.154555 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000
May 17 01:47:07.154563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154571 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1]
May 17 01:47:07.154578 kernel: Detected PIPT I-cache on CPU55
May 17 01:47:07.154586 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000
May 17 01:47:07.154595 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000
May 17 01:47:07.154603 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154610 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1]
May 17 01:47:07.154618 kernel: Detected PIPT I-cache on CPU56
May 17 01:47:07.154625 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000
May 17 01:47:07.154633 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000
May 17 01:47:07.154641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154648 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1]
May 17 01:47:07.154657 kernel: Detected PIPT I-cache on CPU57
May 17 01:47:07.154665 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000
May 17 01:47:07.154674 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000
May 17 01:47:07.154682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154689 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1]
May 17 01:47:07.154697 kernel: Detected PIPT I-cache on CPU58
May 17 01:47:07.154704 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000
May 17 01:47:07.154712 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000
May 17 01:47:07.154720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154728 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1]
May 17 01:47:07.154736 kernel: Detected PIPT I-cache on CPU59
May 17 01:47:07.154745 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000
May 17 01:47:07.154752 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000
May 17 01:47:07.154760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154768 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1]
May 17 01:47:07.154775 kernel: Detected PIPT I-cache on CPU60
May 17 01:47:07.154783 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000
May 17 01:47:07.154791 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000
May 17 01:47:07.154799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154806 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1]
May 17 01:47:07.154814 kernel: Detected PIPT I-cache on CPU61
May 17 01:47:07.154823 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000
May 17 01:47:07.154831 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000
May 17 01:47:07.154839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154846 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1]
May 17 01:47:07.154854 kernel: Detected PIPT I-cache on CPU62
May 17 01:47:07.154861 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000
May 17 01:47:07.154869 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000
May 17 01:47:07.154877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154884 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1]
May 17 01:47:07.154893 kernel: Detected PIPT I-cache on CPU63
May 17 01:47:07.154901 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000
May 17 01:47:07.154909 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000
May 17 01:47:07.154917 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154924 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1]
May 17 01:47:07.154932 kernel: Detected PIPT I-cache on CPU64
May 17 01:47:07.154940 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000
May 17 01:47:07.154947 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000
May 17 01:47:07.154955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.154963 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1]
May 17 01:47:07.154971 kernel: Detected PIPT I-cache on CPU65
May 17 01:47:07.154979 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000
May 17 01:47:07.154987 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000
May 17 01:47:07.154995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155002 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1]
May 17 01:47:07.155010 kernel: Detected PIPT I-cache on CPU66
May 17 01:47:07.155018 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000
May 17 01:47:07.155025 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000
May 17 01:47:07.155033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155042 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1]
May 17 01:47:07.155050 kernel: Detected PIPT I-cache on CPU67
May 17 01:47:07.155058 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000
May 17 01:47:07.155066 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000
May 17 01:47:07.155073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155081 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1]
May 17 01:47:07.155088 kernel: Detected PIPT I-cache on CPU68
May 17 01:47:07.155096 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000
May 17 01:47:07.155104 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000
May 17 01:47:07.155113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155120 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1]
May 17 01:47:07.155128 kernel: Detected PIPT I-cache on CPU69
May 17 01:47:07.155138 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000
May 17 01:47:07.155146 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000
May 17 01:47:07.155153 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155161 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1]
May 17 01:47:07.155169 kernel: Detected PIPT I-cache on CPU70
May 17 01:47:07.155176 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000
May 17 01:47:07.155184 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000
May 17 01:47:07.155193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155201 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1]
May 17 01:47:07.155208 kernel: Detected PIPT I-cache on CPU71
May 17 01:47:07.155216 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000
May 17 01:47:07.155224 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000
May 17 01:47:07.155231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155239 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1]
May 17 01:47:07.155247 kernel: Detected PIPT I-cache on CPU72
May 17 01:47:07.155254 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000
May 17 01:47:07.155264 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000
May 17 01:47:07.155271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155279 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1]
May 17 01:47:07.155286 kernel: Detected PIPT I-cache on CPU73
May 17 01:47:07.155294 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000
May 17 01:47:07.155302 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000
May 17 01:47:07.155309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155317 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1]
May 17 01:47:07.155325 kernel: Detected PIPT I-cache on CPU74
May 17 01:47:07.155332 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000
May 17 01:47:07.155342 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000
May 17 01:47:07.155349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155357 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1]
May 17 01:47:07.155365 kernel: Detected PIPT I-cache on CPU75
May 17 01:47:07.155372 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000
May 17 01:47:07.155380 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000
May 17 01:47:07.155388 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155395 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1]
May 17 01:47:07.155403 kernel: Detected PIPT I-cache on CPU76
May 17 01:47:07.155412 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000
May 17 01:47:07.155420 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000
May 17 01:47:07.155427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155435 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1]
May 17 01:47:07.155443 kernel: Detected PIPT I-cache on CPU77
May 17 01:47:07.155450 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000
May 17 01:47:07.155458 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000
May 17 01:47:07.155466 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155473 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1]
May 17 01:47:07.155481 kernel: Detected PIPT I-cache on CPU78
May 17 01:47:07.155490 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000
May 17 01:47:07.155498 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000
May 17 01:47:07.155506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155513 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1]
May 17 01:47:07.155521 kernel: Detected PIPT I-cache on CPU79
May 17 01:47:07.155529 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000
May 17 01:47:07.155536 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000
May 17 01:47:07.155544 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:47:07.155552 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1]
May 17 01:47:07.155561 kernel: smp: Brought up 1 node, 80 CPUs
May 17 01:47:07.155568 kernel: SMP: Total of 80 processors activated.
May 17 01:47:07.155576 kernel: CPU features: detected: 32-bit EL0 Support
May 17 01:47:07.155584 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 01:47:07.155592 kernel: CPU features: detected: Common not Private translations
May 17 01:47:07.155599 kernel: CPU features: detected: CRC32 instructions
May 17 01:47:07.155607 kernel: CPU features: detected: Enhanced Virtualization Traps
May 17 01:47:07.155615 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 01:47:07.155623 kernel: CPU features: detected: LSE atomic instructions
May 17 01:47:07.155632 kernel: CPU features: detected: Privileged Access Never
May 17 01:47:07.155639 kernel: CPU features: detected: RAS Extension Support
May 17 01:47:07.155647 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 17 01:47:07.155654 kernel: CPU: All CPU(s) started at EL2
May 17 01:47:07.155662 kernel: alternatives: applying system-wide alternatives
May 17 01:47:07.155670 kernel: devtmpfs: initialized
May 17 01:47:07.155678 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 01:47:07.155685 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
May 17 01:47:07.155693 kernel: pinctrl core: initialized pinctrl subsystem
May 17 01:47:07.155702 kernel: SMBIOS 3.4.0 present.
May 17 01:47:07.155710 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021
May 17 01:47:07.155718 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 01:47:07.155725 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
May 17 01:47:07.155733 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 01:47:07.155741 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 01:47:07.155748 kernel: audit: initializing netlink subsys (disabled)
May 17 01:47:07.155756 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1
May 17 01:47:07.155764 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 01:47:07.155773 kernel: cpuidle: using governor menu
May 17 01:47:07.155781 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 01:47:07.155788 kernel: ASID allocator initialised with 32768 entries
May 17 01:47:07.155796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 01:47:07.155804 kernel: Serial: AMBA PL011 UART driver
May 17 01:47:07.155812 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 01:47:07.155819 kernel: Modules: 0 pages in range for non-PLT usage
May 17 01:47:07.155827 kernel: Modules: 509024 pages in range for PLT usage
May 17 01:47:07.155835 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 01:47:07.155844 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 01:47:07.155851 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 01:47:07.155859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 01:47:07.155867 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 01:47:07.155875 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 01:47:07.155883 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 01:47:07.155891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 01:47:07.155898 kernel: ACPI: Added _OSI(Module Device)
May 17 01:47:07.155906 kernel: ACPI: Added _OSI(Processor Device)
May 17 01:47:07.155915 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 01:47:07.155923 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 01:47:07.155930 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
May 17 01:47:07.155938 kernel: ACPI: Interpreter enabled
May 17 01:47:07.155945 kernel: ACPI: Using GIC for interrupt routing
May 17 01:47:07.155953 kernel: ACPI: MCFG table detected, 8 entries
May 17 01:47:07.155961 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155969 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155976 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155985 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.155993 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156001 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156008 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156016 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
May 17 01:47:07.156024 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
May 17 01:47:07.156032 kernel: printk: console [ttyAMA0] enabled
May 17 01:47:07.156039 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
May 17 01:47:07.156047 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
May 17 01:47:07.156192 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.156268 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.156335 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.156399 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.156463 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
May 17 01:47:07.156525 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
May 17 01:47:07.156538 kernel: PCI host bridge to bus 000d:00
May 17 01:47:07.156614 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
May 17 01:47:07.156673 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
May 17 01:47:07.156731 kernel: pci_bus 000d:00: root bus resource [bus 00-ff]
May 17 01:47:07.156814 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000
May 17 01:47:07.156889 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400
May 17 01:47:07.156957 kernel: pci 000d:00:01.0: enabling Extended Tags
May 17 01:47:07.157025 kernel: pci 000d:00:01.0: supports D1 D2
May 17 01:47:07.157091 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157181 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400
May 17 01:47:07.157250 kernel: pci 000d:00:02.0: supports D1 D2
May 17 01:47:07.157316 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157392 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400
May 17 01:47:07.157460 kernel: pci 000d:00:03.0: supports D1 D2
May 17 01:47:07.157526 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157599 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400
May 17 01:47:07.157665 kernel: pci 000d:00:04.0: supports D1 D2
May 17 01:47:07.157730 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.157740 kernel: acpiphp: Slot [1] registered
May 17 01:47:07.157748 kernel: acpiphp: Slot [2] registered
May 17 01:47:07.157756 kernel: acpiphp: Slot [3] registered
May 17 01:47:07.157766 kernel: acpiphp: Slot [4] registered
May 17 01:47:07.157826 kernel: pci_bus 000d:00: on NUMA node 0
May 17 01:47:07.157895 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 01:47:07.157961 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.158027 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.158092 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.158187 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.158255 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.158320 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.158382 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.158445 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.158512 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.158577 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.158640 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.158708 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff]
May 17 01:47:07.158772 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref]
May 17 01:47:07.158837 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff]
May 17 01:47:07.158902 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref]
May 17 01:47:07.158968 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff]
May 17 01:47:07.159033 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref]
May 17 01:47:07.159099 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff]
May 17 01:47:07.159167 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref]
May 17 01:47:07.159236 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.159301 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.159367 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.159432 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.159498 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.159564 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.159630 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.159697 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.159761 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.159828 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.159892 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.159958 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.160023 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.160088 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.160155 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.160224 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.160288 kernel: pci 000d:00:01.0: PCI bridge to [bus 01]
May 17 01:47:07.160354 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]
May 17 01:47:07.160418 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]
May 17 01:47:07.160484 kernel: pci 000d:00:02.0: PCI bridge to [bus 02]
May 17 01:47:07.160548 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]
May 17 01:47:07.160614 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]
May 17 01:47:07.160682 kernel: pci 000d:00:03.0: PCI bridge to [bus 03]
May 17 01:47:07.160747 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]
May 17 01:47:07.160813 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]
May 17 01:47:07.160876 kernel: pci 000d:00:04.0: PCI bridge to [bus 04]
May 17 01:47:07.160942 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]
May 17 01:47:07.161007 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]
May 17 01:47:07.161069 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window]
May 17 01:47:07.161127 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window]
May 17 01:47:07.161202 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff]
May 17 01:47:07.161262 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref]
May 17 01:47:07.161330 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff]
May 17 01:47:07.161391 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref]
May 17 01:47:07.161470 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff]
May 17 01:47:07.161532 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref]
May 17 01:47:07.161602 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff]
May 17 01:47:07.161662 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref]
May 17 01:47:07.161673 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff])
May 17 01:47:07.161745 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.161813 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.161875 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.161939 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.162001 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00
May 17 01:47:07.162064 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff]
May 17 01:47:07.162074 kernel: PCI host bridge to bus 0000:00
May 17 01:47:07.162142 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window]
May 17 01:47:07.162204 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window]
May 17 01:47:07.162261 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 01:47:07.162335 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000
May 17 01:47:07.162407 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400
May 17 01:47:07.162474 kernel: pci 0000:00:01.0: enabling Extended Tags
May 17 01:47:07.162538 kernel: pci 0000:00:01.0: supports D1 D2
May 17 01:47:07.162603 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.162679 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400
May 17 01:47:07.162744 kernel: pci 0000:00:02.0: supports D1 D2
May 17 01:47:07.162809 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.162881 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400
May 17 01:47:07.162948 kernel: pci 0000:00:03.0: supports D1 D2
May 17 01:47:07.163012 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.163085 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400
May 17 01:47:07.163158 kernel: pci 0000:00:04.0: supports D1 D2
May 17 01:47:07.163224 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.163235 kernel: acpiphp: Slot [1-1] registered
May 17 01:47:07.163242 kernel: acpiphp: Slot [2-1] registered
May 17 01:47:07.163250 kernel: acpiphp: Slot [3-1] registered
May 17 01:47:07.163258 kernel: acpiphp: Slot [4-1] registered
May 17 01:47:07.163313 kernel: pci_bus 0000:00: on NUMA node 0
May 17 01:47:07.163380 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 01:47:07.163447 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.163513 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.163579 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.163644 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.163708 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.163774 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.163840 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.163907 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.163973 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.164037 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.164102 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.164170 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff]
May 17 01:47:07.164236 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 17 01:47:07.164300 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff]
May 17 01:47:07.164368 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 17 01:47:07.164433 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff]
May 17 01:47:07.164498 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 17 01:47:07.164562 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff]
May 17 01:47:07.164627 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 17 01:47:07.164691 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.164757 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.164822 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.164890 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.164955 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.165019 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.165085 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.165153 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.165220 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.165283 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.165349 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.165413 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.165483 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.165547 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.165612 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.165675 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.165740 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 01:47:07.165804 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]
May 17 01:47:07.165869 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 17 01:47:07.165934 kernel: pci 0000:00:02.0: PCI bridge to [bus 02]
May 17 01:47:07.166001 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]
May 17 01:47:07.166067 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 17 01:47:07.166135 kernel: pci 0000:00:03.0: PCI bridge to [bus 03]
May 17 01:47:07.166203 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]
May 17 01:47:07.166267 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 17 01:47:07.166332 kernel: pci 0000:00:04.0: PCI bridge to [bus 04]
May 17 01:47:07.166396 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]
May 17 01:47:07.166462 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 17 01:47:07.166520 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window]
May 17 01:47:07.166581 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window]
May 17 01:47:07.166649 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff]
May 17 01:47:07.166709 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 17 01:47:07.166777 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff]
May 17 01:47:07.166837 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 17 01:47:07.166913 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff]
May 17 01:47:07.166979 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 17 01:47:07.167046 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff]
May 17 01:47:07.167107 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 17 01:47:07.167117 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff])
May 17 01:47:07.167193 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.167257 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.167324 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.167386 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.167450 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00
May 17 01:47:07.167512 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff]
May 17 01:47:07.167522 kernel: PCI host bridge to bus 0005:00
May 17 01:47:07.167586 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window]
May 17 01:47:07.167644 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window]
May 17 01:47:07.167704 kernel: pci_bus 0005:00: root bus resource [bus 00-ff]
May 17 01:47:07.167775 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000
May 17 01:47:07.167852 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400
May 17 01:47:07.167918 kernel: pci 0005:00:01.0: supports D1 D2
May 17 01:47:07.167985 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.168057 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400
May 17 01:47:07.168122 kernel: pci 0005:00:03.0: supports D1 D2
May 17 01:47:07.168194 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.168267 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400
May 17 01:47:07.168335 kernel: pci 0005:00:05.0: supports D1 D2
May 17 01:47:07.168401 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.168474 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400
May 17 01:47:07.168538 kernel: pci 0005:00:07.0: supports D1 D2
May 17 01:47:07.168603 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.168615 kernel: acpiphp: Slot [1-2] registered
May 17 01:47:07.168623 kernel: acpiphp: Slot [2-2] registered
May 17 01:47:07.168696 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802
May 17 01:47:07.168765 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit]
May 17 01:47:07.168831 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref]
May 17 01:47:07.168905 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802
May 17 01:47:07.168973 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit]
May 17 01:47:07.169042 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref]
May 17 01:47:07.169102 kernel: pci_bus 0005:00: on NUMA node 0
May 17 01:47:07.169188 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 01:47:07.169256 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.169323 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 01:47:07.169395 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 01:47:07.169460 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.169533 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 01:47:07.169613 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:47:07.169677 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:47:07.169744 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 01:47:07.169809 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:47:07.169878 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:47:07.169942 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000
May 17 01:47:07.170011 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff]
May 17 01:47:07.170076 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 17 01:47:07.170148 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff]
May 17 01:47:07.170213 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 17 01:47:07.170279 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff]
May 17 01:47:07.170343 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 17 01:47:07.170409 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff]
May 17 01:47:07.170477 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 17 01:47:07.170542 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.170607 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.170673 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.170737 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.170804 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.170870 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.170935 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.171003 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.171069 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.171136 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.171203 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.171269 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.171335 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.171400 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.171464 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:47:07.171529 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:47:07.171593 kernel: pci 0005:00:01.0: PCI bridge to [bus 01]
May 17 01:47:07.171675 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]
May 17 01:47:07.171741 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 17 01:47:07.171808 kernel: pci 0005:00:03.0: PCI bridge to [bus 02]
May 17 01:47:07.171874 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]
May 17 01:47:07.171939 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 17 01:47:07.172009 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref]
May 17 01:47:07.172078 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit]
May 17 01:47:07.172148 kernel: pci 0005:00:05.0: PCI bridge to [bus 03]
May 17 01:47:07.172214 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]
May 17 01:47:07.172280 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 17 01:47:07.172348 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref]
May 17 01:47:07.172417 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit]
May 17 01:47:07.172481 kernel: pci 0005:00:07.0: PCI bridge to [bus 04]
May 17 01:47:07.172551 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]
May 17 01:47:07.172615 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 17 01:47:07.172676 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window]
May 17 01:47:07.172734 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window]
May 17 01:47:07.172805 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff]
May 17 01:47:07.172867 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 17 01:47:07.172946 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff]
May 17 01:47:07.173008 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 17 01:47:07.173075 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff]
May 17 01:47:07.173139 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 17 01:47:07.173207 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff]
May 17 01:47:07.173274 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 17 01:47:07.173284 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff])
May 17 01:47:07.173363 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:47:07.173431 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 01:47:07.173494 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability]
May 17 01:47:07.173557 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 01:47:07.173622 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00
May 17 01:47:07.173693 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff]
May 17 01:47:07.173704 kernel: PCI host bridge to bus 0003:00
May 17 01:47:07.173772 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window]
May 17 01:47:07.173834 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window]
May 17 01:47:07.173894 kernel: pci_bus 0003:00: root bus resource [bus 00-ff]
May 17 01:47:07.173968 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000
May 17 01:47:07.174041 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400
May 17 01:47:07.174108 kernel: pci 0003:00:01.0: supports D1 D2
May 17 01:47:07.174178 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.174250 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400
May 17 01:47:07.174316 kernel: pci 0003:00:03.0: supports D1 D2
May 17 01:47:07.174380 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.174452 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400
May 17 01:47:07.174517 kernel: pci 0003:00:05.0: supports D1 D2
May 17 01:47:07.174584 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot
May 17 01:47:07.174594 kernel: acpiphp: Slot [1-3] registered
May 17 01:47:07.174602 kernel: acpiphp: Slot [2-3] registered
May 17 01:47:07.174675 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000
May 17 01:47:07.174743 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff]
May 17 01:47:07.174811 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f]
May 17 01:47:07.174876 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff]
May 17 01:47:07.174943 kernel: pci
0003:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:47:07.175012 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 17 01:47:07.175081 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:47:07.175195 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 17 01:47:07.175265 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 01:47:07.175332 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 17 01:47:07.175405 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 17 01:47:07.175474 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 17 01:47:07.175539 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 17 01:47:07.175605 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 17 01:47:07.175670 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 17 01:47:07.175735 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 17 01:47:07.175800 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:47:07.175866 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 17 01:47:07.175930 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 01:47:07.175991 kernel: pci_bus 0003:00: on NUMA node 0 May 17 01:47:07.176055 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:47:07.176119 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.176186 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.176252 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:47:07.176316 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.176379 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.176447 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 17 01:47:07.176510 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 17 01:47:07.176573 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 17 01:47:07.176636 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 01:47:07.176713 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 17 01:47:07.176779 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 01:47:07.176843 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 17 01:47:07.176909 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 01:47:07.176974 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.177037 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.177101 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.177169 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.177235 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.177300 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.177365 kernel: pci 0003:00:05.0: BAR 13: no 
space for [io size 0x1000] May 17 01:47:07.177430 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.177497 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.177562 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.177626 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.177692 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.177757 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 17 01:47:07.177822 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 17 01:47:07.177887 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 01:47:07.177955 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 17 01:47:07.178021 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 17 01:47:07.178088 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 01:47:07.178160 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 17 01:47:07.178230 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 17 01:47:07.178298 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 17 01:47:07.178366 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 17 01:47:07.178436 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 17 01:47:07.178503 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 17 01:47:07.178571 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 17 01:47:07.178638 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 17 01:47:07.178705 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 01:47:07.178773 kernel: pci 0003:03:00.0: BAR 2: failed to 
assign [io size 0x0020] May 17 01:47:07.178839 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 17 01:47:07.178910 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 01:47:07.178978 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 01:47:07.179048 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 17 01:47:07.179115 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 17 01:47:07.179187 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 01:47:07.179252 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 17 01:47:07.179318 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 17 01:47:07.179386 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 01:47:07.179447 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 01:47:07.179504 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 17 01:47:07.179562 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 17 01:47:07.179642 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 17 01:47:07.179703 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 01:47:07.179774 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 17 01:47:07.179834 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 01:47:07.179902 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 17 01:47:07.179961 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 01:47:07.179972 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 17 01:47:07.180042 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:47:07.180105 kernel: acpi PNP0A08:04: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 17 01:47:07.180176 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 17 01:47:07.180239 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:47:07.180302 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 17 01:47:07.180365 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 17 01:47:07.180376 kernel: PCI host bridge to bus 000c:00 May 17 01:47:07.180441 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 17 01:47:07.180500 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 17 01:47:07.180561 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 17 01:47:07.180635 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 17 01:47:07.180710 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 17 01:47:07.180781 kernel: pci 000c:00:01.0: enabling Extended Tags May 17 01:47:07.180846 kernel: pci 000c:00:01.0: supports D1 D2 May 17 01:47:07.180912 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 17 01:47:07.180984 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 17 01:47:07.181053 kernel: pci 000c:00:02.0: supports D1 D2 May 17 01:47:07.181118 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 17 01:47:07.181196 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 17 01:47:07.181261 kernel: pci 000c:00:03.0: supports D1 D2 May 17 01:47:07.181327 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 17 01:47:07.181398 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 17 01:47:07.181465 kernel: pci 000c:00:04.0: supports D1 D2 May 17 01:47:07.181532 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 17 01:47:07.181543 kernel: acpiphp: Slot [1-4] registered May 17 01:47:07.181551 
kernel: acpiphp: Slot [2-4] registered May 17 01:47:07.181559 kernel: acpiphp: Slot [3-2] registered May 17 01:47:07.181567 kernel: acpiphp: Slot [4-2] registered May 17 01:47:07.181625 kernel: pci_bus 000c:00: on NUMA node 0 May 17 01:47:07.181693 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:47:07.181758 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.181827 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.181892 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:47:07.181958 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.182023 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.182089 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:47:07.182158 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.182224 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.182292 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:47:07.182358 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.182423 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.182489 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 17 01:47:07.182554 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] 
May 17 01:47:07.182619 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 17 01:47:07.182684 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 01:47:07.182752 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 17 01:47:07.182817 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 01:47:07.182882 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 17 01:47:07.182951 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 01:47:07.183016 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.183082 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.183149 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.183216 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.183282 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.183348 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.183412 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.183479 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.183543 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.183609 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.183673 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.183739 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.183807 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.183872 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.183937 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.184003 kernel: pci 
000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.184068 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 17 01:47:07.184136 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 17 01:47:07.184202 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 01:47:07.184267 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 17 01:47:07.184335 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 17 01:47:07.184404 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 01:47:07.184470 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 17 01:47:07.184534 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 17 01:47:07.184600 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 01:47:07.184665 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 17 01:47:07.184732 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 17 01:47:07.184797 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 01:47:07.184857 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 17 01:47:07.184914 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 17 01:47:07.184986 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 17 01:47:07.185048 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 01:47:07.185127 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 17 01:47:07.185195 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 01:47:07.185262 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 17 01:47:07.185324 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 01:47:07.185391 kernel: pci_bus 000c:04: resource 1 [mem 
0x40600000-0x407fffff] May 17 01:47:07.185452 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 01:47:07.185462 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 17 01:47:07.185536 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:47:07.185602 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:47:07.185665 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 17 01:47:07.185728 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:47:07.185791 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 17 01:47:07.185854 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 17 01:47:07.185864 kernel: PCI host bridge to bus 0002:00 May 17 01:47:07.185934 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 17 01:47:07.185993 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 17 01:47:07.186051 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 17 01:47:07.186123 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:47:07.186201 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:47:07.186267 kernel: pci 0002:00:01.0: supports D1 D2 May 17 01:47:07.186335 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 17 01:47:07.186405 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 17 01:47:07.186472 kernel: pci 0002:00:03.0: supports D1 D2 May 17 01:47:07.186536 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 17 01:47:07.186608 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:47:07.186673 kernel: pci 0002:00:05.0: supports D1 D2 May 17 01:47:07.186738 kernel: pci 0002:00:05.0: PME# supported 
from D0 D1 D3hot May 17 01:47:07.186814 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 17 01:47:07.186885 kernel: pci 0002:00:07.0: supports D1 D2 May 17 01:47:07.186953 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 17 01:47:07.186963 kernel: acpiphp: Slot [1-5] registered May 17 01:47:07.186972 kernel: acpiphp: Slot [2-5] registered May 17 01:47:07.186980 kernel: acpiphp: Slot [3-3] registered May 17 01:47:07.186988 kernel: acpiphp: Slot [4-3] registered May 17 01:47:07.187045 kernel: pci_bus 0002:00: on NUMA node 0 May 17 01:47:07.187116 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:47:07.187420 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.187491 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:47:07.187562 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:47:07.187627 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.187693 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.187759 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:47:07.187823 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.187887 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.187953 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:47:07.188016 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 
01:47:07.188081 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.188152 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 17 01:47:07.188218 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 01:47:07.188281 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 17 01:47:07.188345 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 01:47:07.188409 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 17 01:47:07.188473 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 01:47:07.188537 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 17 01:47:07.188617 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 01:47:07.188683 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.188746 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.188811 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.188874 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.188939 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.189001 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.189065 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.189134 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.189199 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.189263 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.189326 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.189390 kernel: pci 0002:00:05.0: BAR 13: failed 
to assign [io size 0x1000] May 17 01:47:07.189454 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.189517 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.189581 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.189646 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.189712 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 17 01:47:07.189776 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 17 01:47:07.189842 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 01:47:07.189906 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 17 01:47:07.189970 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 17 01:47:07.190034 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 01:47:07.190098 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 17 01:47:07.190168 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 17 01:47:07.190232 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 01:47:07.190297 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 17 01:47:07.190361 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 17 01:47:07.190427 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 01:47:07.190486 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 17 01:47:07.190546 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 17 01:47:07.190616 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 17 01:47:07.190677 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 01:47:07.190744 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 17 01:47:07.190805 kernel: pci_bus 0002:02: 
resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 01:47:07.190881 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 17 01:47:07.190943 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 01:47:07.191010 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 17 01:47:07.191070 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 01:47:07.191081 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 17 01:47:07.191212 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:47:07.191279 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:47:07.191342 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 17 01:47:07.191407 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:47:07.191468 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 17 01:47:07.191529 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 17 01:47:07.191540 kernel: PCI host bridge to bus 0001:00 May 17 01:47:07.191603 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 17 01:47:07.191669 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 17 01:47:07.191729 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 17 01:47:07.191802 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 17 01:47:07.191874 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 17 01:47:07.191938 kernel: pci 0001:00:01.0: enabling Extended Tags May 17 01:47:07.192002 kernel: pci 0001:00:01.0: supports D1 D2 May 17 01:47:07.192066 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 17 01:47:07.192142 kernel: pci 0001:00:02.0: 
[1def:e102] type 01 class 0x060400 May 17 01:47:07.192208 kernel: pci 0001:00:02.0: supports D1 D2 May 17 01:47:07.192271 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 17 01:47:07.192344 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 17 01:47:07.192410 kernel: pci 0001:00:03.0: supports D1 D2 May 17 01:47:07.192473 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 17 01:47:07.192544 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 17 01:47:07.192612 kernel: pci 0001:00:04.0: supports D1 D2 May 17 01:47:07.192677 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 17 01:47:07.192688 kernel: acpiphp: Slot [1-6] registered May 17 01:47:07.192759 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 01:47:07.192826 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 01:47:07.192892 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 17 01:47:07.192958 kernel: pci 0001:01:00.0: PME# supported from D3cold May 17 01:47:07.193024 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 01:47:07.193099 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 01:47:07.193174 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 01:47:07.193240 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 17 01:47:07.193306 kernel: pci 0001:01:00.1: PME# supported from D3cold May 17 01:47:07.193317 kernel: acpiphp: Slot [2-6] registered May 17 01:47:07.193325 kernel: acpiphp: Slot [3-4] registered May 17 01:47:07.193333 kernel: acpiphp: Slot [4-4] registered May 17 01:47:07.193391 kernel: pci_bus 0001:00: on NUMA node 0 May 17 01:47:07.193456 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:47:07.193521 kernel: pci 
0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:47:07.193586 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.193653 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:47:07.193718 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:47:07.193782 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.193846 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.193914 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:47:07.193978 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.194043 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.194107 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 01:47:07.194175 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 17 01:47:07.194239 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 17 01:47:07.194306 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 01:47:07.194370 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 17 01:47:07.194435 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 01:47:07.194499 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 17 01:47:07.194563 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 01:47:07.194628 kernel: 
pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.194691 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.194755 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.194821 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.194886 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.194949 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.195014 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.195077 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.195341 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.195421 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.195487 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.195551 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.195619 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.195684 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.195749 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.195813 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.195880 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 01:47:07.195947 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 01:47:07.196013 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 17 01:47:07.196078 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 17 01:47:07.196149 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 17 01:47:07.196214 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 17 01:47:07.196278 
kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 01:47:07.196342 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 17 01:47:07.196406 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 17 01:47:07.196469 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 01:47:07.196536 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 17 01:47:07.196600 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 17 01:47:07.196664 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 01:47:07.196729 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 17 01:47:07.196792 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 17 01:47:07.196856 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 01:47:07.196917 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 17 01:47:07.196974 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 17 01:47:07.197051 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 17 01:47:07.197112 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 01:47:07.197182 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 17 01:47:07.197242 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 01:47:07.197307 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 17 01:47:07.197370 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 01:47:07.197435 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 17 01:47:07.197495 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 01:47:07.197505 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 17 01:47:07.197574 kernel: acpi 
PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:47:07.197637 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:47:07.197703 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 17 01:47:07.197766 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:47:07.197828 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 17 01:47:07.197890 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 17 01:47:07.197901 kernel: PCI host bridge to bus 0004:00 May 17 01:47:07.197964 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 17 01:47:07.198021 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 17 01:47:07.198081 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 17 01:47:07.198155 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:47:07.198228 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:47:07.198293 kernel: pci 0004:00:01.0: supports D1 D2 May 17 01:47:07.198357 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 17 01:47:07.198427 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 17 01:47:07.198493 kernel: pci 0004:00:03.0: supports D1 D2 May 17 01:47:07.198559 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 17 01:47:07.198630 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:47:07.198695 kernel: pci 0004:00:05.0: supports D1 D2 May 17 01:47:07.198759 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 17 01:47:07.198833 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 17 01:47:07.198899 kernel: pci 0004:01:00.0: enabling Extended Tags May 17 01:47:07.198969 kernel: pci 0004:01:00.0: supports D1 D2 May 17 
01:47:07.199034 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:47:07.199111 kernel: pci_bus 0004:02: extended config space not accessible May 17 01:47:07.199190 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 17 01:47:07.199260 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 17 01:47:07.199330 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 17 01:47:07.199397 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 17 01:47:07.199466 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 17 01:47:07.199536 kernel: pci 0004:02:00.0: supports D1 D2 May 17 01:47:07.199604 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:47:07.199677 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 17 01:47:07.199744 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 17 01:47:07.199810 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:47:07.199871 kernel: pci_bus 0004:00: on NUMA node 0 May 17 01:47:07.199937 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 17 01:47:07.200004 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:47:07.200069 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:47:07.200136 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 01:47:07.200202 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:47:07.200266 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:47:07.200330 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 
01:47:07.200395 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 01:47:07.200461 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 01:47:07.200526 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 17 01:47:07.200590 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 01:47:07.200655 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 17 01:47:07.200718 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 01:47:07.200783 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.200846 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.200913 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.200976 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.201040 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.201103 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.201171 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.201235 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.201300 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.201364 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.201428 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.201493 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.201560 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 01:47:07.201627 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 17 01:47:07.201692 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 17 01:47:07.201764 kernel: pci 
0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 17 01:47:07.201832 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 17 01:47:07.201901 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 17 01:47:07.201970 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 17 01:47:07.202038 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 17 01:47:07.202104 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 17 01:47:07.202173 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 17 01:47:07.202238 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 17 01:47:07.202302 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 01:47:07.202369 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 17 01:47:07.202433 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 17 01:47:07.202497 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 17 01:47:07.202564 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 01:47:07.202629 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 17 01:47:07.202692 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 17 01:47:07.202757 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 01:47:07.202816 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 01:47:07.202872 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 17 01:47:07.202932 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 17 01:47:07.203001 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 17 01:47:07.203061 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 01:47:07.203124 kernel: pci_bus 0004:02: resource 1 [mem 
0x20000000-0x22ffffff] May 17 01:47:07.203195 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 17 01:47:07.203254 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 01:47:07.203324 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 17 01:47:07.203383 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 01:47:07.203393 kernel: iommu: Default domain type: Translated May 17 01:47:07.203402 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 01:47:07.203410 kernel: efivars: Registered efivars operations May 17 01:47:07.203477 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 17 01:47:07.203546 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 17 01:47:07.203615 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 17 01:47:07.203628 kernel: vgaarb: loaded May 17 01:47:07.203636 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 01:47:07.203645 kernel: VFS: Disk quotas dquot_6.6.0 May 17 01:47:07.203653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 01:47:07.203661 kernel: pnp: PnP ACPI init May 17 01:47:07.203730 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 17 01:47:07.203790 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 17 01:47:07.203851 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 17 01:47:07.203909 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved May 17 01:47:07.203968 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 17 01:47:07.204026 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 17 01:47:07.204086 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved May 17 01:47:07.204147 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 17 01:47:07.204158 kernel: pnp: PnP ACPI: found 1 devices May 17 01:47:07.204168 kernel: NET: Registered PF_INET protocol family May 17 01:47:07.204177 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204185 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 01:47:07.204193 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 01:47:07.204202 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 01:47:07.204210 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204218 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 17 01:47:07.204227 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204235 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 01:47:07.204245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 01:47:07.204312 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 17 01:47:07.204323 kernel: kvm [1]: IPA Size Limit: 48 bits May 17 01:47:07.204331 kernel: kvm [1]: GICv3: no GICV resource entry May 17 01:47:07.204340 kernel: kvm [1]: disabling GICv2 emulation May 17 01:47:07.204348 kernel: kvm [1]: GIC system register CPU interface enabled May 17 01:47:07.204358 kernel: kvm [1]: vgic interrupt IRQ9 May 17 01:47:07.204366 kernel: kvm [1]: VHE mode initialized successfully May 17 01:47:07.204374 kernel: Initialise system trusted keyrings May 17 01:47:07.204383 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 17 01:47:07.204392 kernel: Key type asymmetric registered May 17 01:47:07.204399 kernel: Asymmetric key parser 'x509' registered May 17 01:47:07.204408 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 17 01:47:07.204416 kernel: io scheduler mq-deadline registered May 17 01:47:07.204424 kernel: io scheduler kyber registered May 17 01:47:07.204432 kernel: io scheduler bfq registered May 17 01:47:07.204440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 01:47:07.204448 kernel: ACPI: button: Power Button [PWRB] May 17 01:47:07.204458 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 17 01:47:07.204466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 01:47:07.204539 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 17 01:47:07.204601 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.204661 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.204722 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 17 01:47:07.204782 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 17 01:47:07.204843 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 17 01:47:07.204912 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 17 01:47:07.204972 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.205032 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.205091 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 17 01:47:07.205154 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 17 01:47:07.205216 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 17 01:47:07.205287 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 17 01:47:07.205347 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.205407 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.205466 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 17 01:47:07.205526 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 17 01:47:07.205585 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 17 01:47:07.205654 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 17 01:47:07.205714 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.205774 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.205835 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 17 01:47:07.205894 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 17 01:47:07.205954 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 17 01:47:07.206029 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 17 01:47:07.206093 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.206158 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.206219 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 17 01:47:07.206279 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 17 01:47:07.206339 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 17 01:47:07.206406 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 17 01:47:07.206470 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.206529 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.206590 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 17 01:47:07.206649 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 17 01:47:07.206709 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 17 01:47:07.206778 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 17 01:47:07.206838 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.206901 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.206961 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 17 01:47:07.207021 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 17 01:47:07.207080 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 17 01:47:07.207152 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 17 01:47:07.207212 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 17 01:47:07.207276 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 01:47:07.207335 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 17 01:47:07.207396 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 17 01:47:07.207458 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 17 01:47:07.207469 kernel: thunder_xcv, ver 1.0 May 17 01:47:07.207477 kernel: thunder_bgx, ver 1.0 May 17 01:47:07.207485 kernel: nicpf, ver 1.0 May 17 01:47:07.207495 kernel: nicvf, ver 1.0 May 17 01:47:07.207562 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 01:47:07.207622 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T01:47:05 UTC (1747446425) May 17 01:47:07.207633 kernel: efifb: probing for efifb May 17 01:47:07.207641 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 17 01:47:07.207649 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 01:47:07.207658 kernel: efifb: scrolling: redraw May 
17 01:47:07.207666 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 01:47:07.207674 kernel: Console: switching to colour frame buffer device 100x37 May 17 01:47:07.207684 kernel: fb0: EFI VGA frame buffer device May 17 01:47:07.207692 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 17 01:47:07.207700 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 01:47:07.207708 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 01:47:07.207717 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 01:47:07.207725 kernel: watchdog: Hard watchdog permanently disabled May 17 01:47:07.207733 kernel: NET: Registered PF_INET6 protocol family May 17 01:47:07.207741 kernel: Segment Routing with IPv6 May 17 01:47:07.207749 kernel: In-situ OAM (IOAM) with IPv6 May 17 01:47:07.207759 kernel: NET: Registered PF_PACKET protocol family May 17 01:47:07.207767 kernel: Key type dns_resolver registered May 17 01:47:07.207775 kernel: registered taskstats version 1 May 17 01:47:07.207782 kernel: Loading compiled-in X.509 certificates May 17 01:47:07.207791 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 01:47:07.207799 kernel: Key type .fscrypt registered May 17 01:47:07.207807 kernel: Key type fscrypt-provisioning registered May 17 01:47:07.207815 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 01:47:07.207823 kernel: ima: Allocated hash algorithm: sha1 May 17 01:47:07.207833 kernel: ima: No architecture policies found May 17 01:47:07.207841 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 01:47:07.207909 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 17 01:47:07.207974 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 17 01:47:07.208041 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 17 01:47:07.208107 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 17 01:47:07.208176 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 17 01:47:07.208241 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 17 01:47:07.208310 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 17 01:47:07.208375 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 17 01:47:07.208441 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 17 01:47:07.208506 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 17 01:47:07.208572 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 17 01:47:07.208637 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 17 01:47:07.208703 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 17 01:47:07.208768 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 17 01:47:07.208834 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 17 01:47:07.208901 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 17 01:47:07.208967 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 17 01:47:07.209031 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 17 01:47:07.209097 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 17 01:47:07.209165 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 17 01:47:07.209231 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 17 01:47:07.209295 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 17 01:47:07.209361 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 May 17 01:47:07.209428 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 17 01:47:07.209494 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 17 01:47:07.209558 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 17 01:47:07.209625 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 17 01:47:07.209689 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 17 01:47:07.209758 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 17 01:47:07.209822 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 17 01:47:07.209887 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 17 01:47:07.209956 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 17 01:47:07.210021 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 17 01:47:07.210086 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 17 01:47:07.210155 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 17 01:47:07.210221 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 17 01:47:07.210286 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 17 01:47:07.210351 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 17 01:47:07.210416 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 17 01:47:07.210480 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 17 01:47:07.210548 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 17 01:47:07.210613 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 17 01:47:07.210678 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 17 01:47:07.210742 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 17 01:47:07.210807 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 17 01:47:07.210872 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 17 01:47:07.210938 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 17 01:47:07.211003 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
May 17 01:47:07.211070 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 17 01:47:07.211138 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 17 01:47:07.211205 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 17 01:47:07.211269 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 17 01:47:07.211335 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 17 01:47:07.211398 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 17 01:47:07.211464 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 17 01:47:07.211528 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 17 01:47:07.211596 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 17 01:47:07.211660 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 17 01:47:07.211725 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 17 01:47:07.211789 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 17 01:47:07.211858 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 17 01:47:07.211869 kernel: clk: Disabling unused clocks May 17 01:47:07.211877 kernel: Freeing unused kernel memory: 39424K May 17 01:47:07.211885 kernel: Run /init as init process May 17 01:47:07.211895 kernel: with arguments: May 17 01:47:07.211903 kernel: /init May 17 01:47:07.211911 kernel: with environment: May 17 01:47:07.211919 kernel: HOME=/ May 17 01:47:07.211927 kernel: TERM=linux May 17 01:47:07.211934 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 01:47:07.211945 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 01:47:07.211955 systemd[1]: Detected architecture arm64. May 17 01:47:07.211965 systemd[1]: Running in initrd. 
May 17 01:47:07.211973 systemd[1]: No hostname configured, using default hostname. May 17 01:47:07.211981 systemd[1]: Hostname set to . May 17 01:47:07.211989 systemd[1]: Initializing machine ID from random generator. May 17 01:47:07.211998 systemd[1]: Queued start job for default target initrd.target. May 17 01:47:07.212007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 01:47:07.212015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 01:47:07.212024 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 01:47:07.212034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 01:47:07.212043 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 01:47:07.212052 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 01:47:07.212061 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 01:47:07.212070 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 01:47:07.212079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 01:47:07.212089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 01:47:07.212097 systemd[1]: Reached target paths.target - Path Units. May 17 01:47:07.212106 systemd[1]: Reached target slices.target - Slice Units. May 17 01:47:07.212116 systemd[1]: Reached target swap.target - Swaps. May 17 01:47:07.212124 systemd[1]: Reached target timers.target - Timer Units. May 17 01:47:07.212136 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 01:47:07.212144 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 01:47:07.212153 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 01:47:07.212161 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 01:47:07.212172 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 01:47:07.212180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 01:47:07.212189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 01:47:07.212197 systemd[1]: Reached target sockets.target - Socket Units. May 17 01:47:07.212206 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 01:47:07.212214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 01:47:07.212223 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 01:47:07.212231 systemd[1]: Starting systemd-fsck-usr.service... May 17 01:47:07.212240 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 01:47:07.212250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 01:47:07.212281 systemd-journald[898]: Collecting audit messages is disabled. May 17 01:47:07.212301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 01:47:07.212311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 01:47:07.212320 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 01:47:07.212328 kernel: Bridge firewalling registered May 17 01:47:07.212337 systemd-journald[898]: Journal started May 17 01:47:07.212356 systemd-journald[898]: Runtime Journal (/run/log/journal/420f97750a9c4d609642c4a7b5194bd3) is 8.0M, max 4.0G, 3.9G free. 
May 17 01:47:07.169768 systemd-modules-load[900]: Inserted module 'overlay' May 17 01:47:07.244079 systemd[1]: Started systemd-journald.service - Journal Service. May 17 01:47:07.191846 systemd-modules-load[900]: Inserted module 'br_netfilter' May 17 01:47:07.249614 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 01:47:07.260373 systemd[1]: Finished systemd-fsck-usr.service. May 17 01:47:07.271093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 01:47:07.281670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:47:07.309305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 01:47:07.315332 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 01:47:07.332478 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 01:47:07.354362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 01:47:07.372156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 01:47:07.388796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 01:47:07.395740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 01:47:07.406967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 01:47:07.439281 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 01:47:07.452362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 17 01:47:07.460499 dracut-cmdline[939]: dracut-dracut-053
May 17 01:47:07.471554 dracut-cmdline[939]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 01:47:07.465866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 01:47:07.479719 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 01:47:07.488235 systemd-resolved[945]: Positive Trust Anchors:
May 17 01:47:07.488244 systemd-resolved[945]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 01:47:07.488275 systemd-resolved[945]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 01:47:07.503188 systemd-resolved[945]: Defaulting to hostname 'linux'.
May 17 01:47:07.516510 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 01:47:07.535540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 01:47:07.637864 kernel: SCSI subsystem initialized
May 17 01:47:07.649141 kernel: Loading iSCSI transport class v2.0-870.
May 17 01:47:07.668141 kernel: iscsi: registered transport (tcp)
May 17 01:47:07.690144 kernel: iscsi: registered transport (qla4xxx)
May 17 01:47:07.690170 kernel: QLogic iSCSI HBA Driver
May 17 01:47:07.739605 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 01:47:07.761295 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 01:47:07.807438 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 01:47:07.807470 kernel: device-mapper: uevent: version 1.0.3
May 17 01:47:07.817035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 01:47:07.883143 kernel: raid6: neonx8 gen() 15846 MB/s
May 17 01:47:07.908142 kernel: raid6: neonx4 gen() 15714 MB/s
May 17 01:47:07.933142 kernel: raid6: neonx2 gen() 13341 MB/s
May 17 01:47:07.958141 kernel: raid6: neonx1 gen() 10530 MB/s
May 17 01:47:07.983142 kernel: raid6: int64x8 gen() 6991 MB/s
May 17 01:47:08.008142 kernel: raid6: int64x4 gen() 7384 MB/s
May 17 01:47:08.033138 kernel: raid6: int64x2 gen() 6150 MB/s
May 17 01:47:08.061077 kernel: raid6: int64x1 gen() 5077 MB/s
May 17 01:47:08.061100 kernel: raid6: using algorithm neonx8 gen() 15846 MB/s
May 17 01:47:08.095762 kernel: raid6: .... xor() 11961 MB/s, rmw enabled
May 17 01:47:08.095783 kernel: raid6: using neon recovery algorithm
May 17 01:47:08.115144 kernel: xor: measuring software checksum speed
May 17 01:47:08.123138 kernel: 8regs : 19052 MB/sec
May 17 01:47:08.134372 kernel: 32regs : 19422 MB/sec
May 17 01:47:08.134392 kernel: arm64_neon : 27213 MB/sec
May 17 01:47:08.141991 kernel: xor: using function: arm64_neon (27213 MB/sec)
May 17 01:47:08.203144 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 01:47:08.214195 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 01:47:08.232310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 01:47:08.245227 systemd-udevd[1133]: Using default interface naming scheme 'v255'.
May 17 01:47:08.248276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 01:47:08.263271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 01:47:08.277520 dracut-pre-trigger[1145]: rd.md=0: removing MD RAID activation
May 17 01:47:08.305193 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 01:47:08.326315 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 01:47:08.428020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 01:47:08.456743 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 01:47:08.456787 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 01:47:08.475264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 01:47:08.521844 kernel: ACPI: bus type USB registered
May 17 01:47:08.521865 kernel: usbcore: registered new interface driver usbfs
May 17 01:47:08.521875 kernel: usbcore: registered new interface driver hub
May 17 01:47:08.521886 kernel: PTP clock support registered
May 17 01:47:08.521895 kernel: usbcore: registered new device driver usb
May 17 01:47:08.517048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 01:47:08.674604 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 17 01:47:08.674625 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 17 01:47:08.674635 kernel: igb 0003:03:00.0: Adding to iommu group 31
May 17 01:47:08.674802 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32
May 17 01:47:08.674897 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 01:47:08.674978 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
May 17 01:47:08.675058 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
May 17 01:47:08.675143 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 33
May 17 01:47:08.675235 kernel: igb 0003:03:00.0: added PHC on eth0
May 17 01:47:08.675320 kernel: nvme 0005:03:00.0: Adding to iommu group 34
May 17 01:47:08.675407 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 17 01:47:08.675485 kernel: nvme 0005:04:00.0: Adding to iommu group 35
May 17 01:47:08.675571 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:5b:a4
May 17 01:47:08.672733 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 01:47:08.680136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 01:47:08.696150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 01:47:08.746124 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
May 17 01:47:08.746271 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 01:47:08.746358 kernel: igb 0003:03:00.1: Adding to iommu group 36
May 17 01:47:08.713339 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 01:47:08.761609 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 01:47:08.761669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:47:08.778452 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 01:47:08.789308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 01:47:08.789350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:47:08.806420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 01:47:08.824225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 01:47:08.836474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 01:47:09.040038 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
May 17 01:47:09.040198 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 01:47:09.040282 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
May 17 01:47:09.040359 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
May 17 01:47:09.040440 kernel: hub 1-0:1.0: USB hub found
May 17 01:47:09.040540 kernel: hub 1-0:1.0: 4 ports detected
May 17 01:47:09.040618 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
May 17 01:47:09.040707 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 01:47:09.040786 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 01:47:09.040878 kernel: hub 2-0:1.0: USB hub found
May 17 01:47:09.040964 kernel: hub 2-0:1.0: 4 ports detected
May 17 01:47:09.041042 kernel: nvme nvme0: pci function 0005:03:00.0
May 17 01:47:09.041130 kernel: nvme nvme1: pci function 0005:04:00.0
May 17 01:47:09.041220 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 17 01:47:09.041290 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 17 01:47:09.031540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:47:09.056266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 01:47:09.098146 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 17 01:47:09.098350 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 17 01:47:09.108138 kernel: igb 0003:03:00.1: added PHC on eth1
May 17 01:47:09.113404 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 17 01:47:09.124842 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:5b:a5
May 17 01:47:09.136402 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 17 01:47:09.145895 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 01:47:09.172261 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 01:47:09.172319 kernel: GPT:9289727 != 1875385007
May 17 01:47:09.172340 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 01:47:09.172360 kernel: GPT:9289727 != 1875385007
May 17 01:47:09.172378 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 01:47:09.172397 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 01:47:09.176882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:47:09.293641 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 17 01:47:09.293798 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 17 01:47:09.293895 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1202)
May 17 01:47:09.293906 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (1218)
May 17 01:47:09.293917 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 17 01:47:09.247821 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 17 01:47:09.302704 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 17 01:47:09.335456 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 17 01:47:09.316221 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 01:47:09.344303 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 01:47:09.356995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 01:47:09.387237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 01:47:09.414863 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 01:47:09.414884 disk-uuid[1303]: Primary Header is updated.
May 17 01:47:09.414884 disk-uuid[1303]: Secondary Entries is updated.
May 17 01:47:09.414884 disk-uuid[1303]: Secondary Header is updated.
May 17 01:47:09.451228 kernel: hub 1-3:1.0: USB hub found
May 17 01:47:09.451392 kernel: hub 1-3:1.0: 4 ports detected
May 17 01:47:09.539142 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 17 01:47:09.576402 kernel: hub 2-3:1.0: USB hub found
May 17 01:47:09.576690 kernel: hub 2-3:1.0: 4 ports detected
May 17 01:47:09.673156 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 01:47:09.686137 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 17 01:47:09.709264 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
May 17 01:47:09.709430 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 01:47:10.056155 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 17 01:47:10.366144 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 01:47:10.380138 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 17 01:47:10.399140 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 17 01:47:10.412641 disk-uuid[1304]: The operation has completed successfully.
May 17 01:47:10.418428 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 01:47:10.465026 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 01:47:10.465113 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 01:47:10.495282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 01:47:10.505475 sh[1483]: Success
May 17 01:47:10.524141 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 01:47:10.557416 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 01:47:10.578255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 01:47:10.588969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 01:47:10.623040 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 01:47:10.623074 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:10.640380 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 01:47:10.654382 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 01:47:10.665799 kernel: BTRFS info (device dm-0): using free space tree
May 17 01:47:10.686145 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 01:47:10.686544 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 01:47:10.696730 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 01:47:10.709291 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 01:47:10.715273 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 01:47:10.827180 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:10.827203 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:10.827219 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 01:47:10.827229 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 01:47:10.827241 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 01:47:10.827251 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:10.823189 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 01:47:10.850308 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 01:47:10.860949 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 01:47:10.892273 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 01:47:10.912019 systemd-networkd[1683]: lo: Link UP
May 17 01:47:10.912025 systemd-networkd[1683]: lo: Gained carrier
May 17 01:47:10.915599 systemd-networkd[1683]: Enumeration completed
May 17 01:47:10.915708 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 01:47:10.917176 systemd-networkd[1683]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:10.940438 ignition[1672]: Ignition 2.19.0
May 17 01:47:10.923955 systemd[1]: Reached target network.target - Network.
May 17 01:47:10.940444 ignition[1672]: Stage: fetch-offline
May 17 01:47:10.949758 unknown[1672]: fetched base config from "system"
May 17 01:47:10.940523 ignition[1672]: no configs at "/usr/lib/ignition/base.d"
May 17 01:47:10.949765 unknown[1672]: fetched user config from "system"
May 17 01:47:10.940531 ignition[1672]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:10.952462 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 01:47:10.940879 ignition[1672]: parsed url from cmdline: ""
May 17 01:47:10.967135 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 01:47:10.940882 ignition[1672]: no config URL provided
May 17 01:47:10.968637 systemd-networkd[1683]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:10.940886 ignition[1672]: reading system config file "/usr/lib/ignition/user.ign"
May 17 01:47:10.977271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 01:47:10.940940 ignition[1672]: parsing config with SHA512: 48bdeef7bc0348126e334dda4df7e44fd422dbd9714c6ac62cb1a3316dce390d6fe28b073b409c0dbce38a5b645be2c37f13445b9877c86d55830204b98a6925
May 17 01:47:11.019697 systemd-networkd[1683]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:10.950245 ignition[1672]: fetch-offline: fetch-offline passed
May 17 01:47:10.950250 ignition[1672]: POST message to Packet Timeline
May 17 01:47:10.950256 ignition[1672]: POST Status error: resource requires networking
May 17 01:47:10.950318 ignition[1672]: Ignition finished successfully
May 17 01:47:11.003756 ignition[1709]: Ignition 2.19.0
May 17 01:47:11.003763 ignition[1709]: Stage: kargs
May 17 01:47:11.003929 ignition[1709]: no configs at "/usr/lib/ignition/base.d"
May 17 01:47:11.003938 ignition[1709]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:11.005057 ignition[1709]: kargs: kargs passed
May 17 01:47:11.005062 ignition[1709]: POST message to Packet Timeline
May 17 01:47:11.005074 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:11.007744 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60840->[::1]:53: read: connection refused
May 17 01:47:11.207861 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #2
May 17 01:47:11.208309 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49792->[::1]:53: read: connection refused
May 17 01:47:11.599149 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 01:47:11.602276 systemd-networkd[1683]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:47:11.609100 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #3
May 17 01:47:11.609542 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43443->[::1]:53: read: connection refused
May 17 01:47:12.235148 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 01:47:12.238031 systemd-networkd[1683]: eno1: Link UP
May 17 01:47:12.238167 systemd-networkd[1683]: eno2: Link UP
May 17 01:47:12.238292 systemd-networkd[1683]: enP1p1s0f0np0: Link UP
May 17 01:47:12.238435 systemd-networkd[1683]: enP1p1s0f0np0: Gained carrier
May 17 01:47:12.245282 systemd-networkd[1683]: enP1p1s0f1np1: Link UP
May 17 01:47:12.284180 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv4 address 147.28.150.2/30, gateway 147.28.150.1 acquired from 147.28.144.140
May 17 01:47:12.410639 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #4
May 17 01:47:12.411247 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47826->[::1]:53: read: connection refused
May 17 01:47:12.605494 systemd-networkd[1683]: enP1p1s0f1np1: Gained carrier
May 17 01:47:13.277356 systemd-networkd[1683]: enP1p1s0f0np0: Gained IPv6LL
May 17 01:47:14.012085 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #5
May 17 01:47:14.012875 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57994->[::1]:53: read: connection refused
May 17 01:47:14.045316 systemd-networkd[1683]: enP1p1s0f1np1: Gained IPv6LL
May 17 01:47:17.216098 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #6
May 17 01:47:17.719746 ignition[1709]: GET result: OK
May 17 01:47:17.995607 ignition[1709]: Ignition finished successfully
May 17 01:47:17.998243 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 01:47:18.019252 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 01:47:18.034574 ignition[1731]: Ignition 2.19.0
May 17 01:47:18.034581 ignition[1731]: Stage: disks
May 17 01:47:18.034794 ignition[1731]: no configs at "/usr/lib/ignition/base.d"
May 17 01:47:18.034803 ignition[1731]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:18.036248 ignition[1731]: disks: disks passed
May 17 01:47:18.036253 ignition[1731]: POST message to Packet Timeline
May 17 01:47:18.036267 ignition[1731]: GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:18.934120 ignition[1731]: GET result: OK
May 17 01:47:19.348695 ignition[1731]: Ignition finished successfully
May 17 01:47:19.352231 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 01:47:19.357535 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 01:47:19.365151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 01:47:19.373210 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 01:47:19.381854 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 01:47:19.390802 systemd[1]: Reached target basic.target - Basic System.
May 17 01:47:19.411280 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 01:47:19.426457 systemd-fsck[1750]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 01:47:19.430010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 01:47:19.452217 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 01:47:19.520140 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 01:47:19.520520 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 01:47:19.530685 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 01:47:19.552211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 01:47:19.644168 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1761)
May 17 01:47:19.644186 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:19.644197 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:19.644207 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 01:47:19.644217 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 01:47:19.644227 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 01:47:19.558277 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 01:47:19.653723 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 01:47:19.660947 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 17 01:47:19.676494 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 01:47:19.676538 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 01:47:19.689717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 01:47:19.703216 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 01:47:19.724783 coreos-metadata[1781]: May 17 01:47:19.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:47:19.737725 coreos-metadata[1782]: May 17 01:47:19.710 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:47:19.727241 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 01:47:19.759867 initrd-setup-root[1811]: cut: /sysroot/etc/passwd: No such file or directory
May 17 01:47:19.765904 initrd-setup-root[1819]: cut: /sysroot/etc/group: No such file or directory
May 17 01:47:19.772277 initrd-setup-root[1827]: cut: /sysroot/etc/shadow: No such file or directory
May 17 01:47:19.778373 initrd-setup-root[1835]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 01:47:19.848983 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 01:47:19.872205 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 01:47:19.880139 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:19.903306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 01:47:19.914305 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 01:47:19.928932 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 01:47:19.934407 ignition[1912]: INFO : Ignition 2.19.0
May 17 01:47:19.934407 ignition[1912]: INFO : Stage: mount
May 17 01:47:19.934407 ignition[1912]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:47:19.934407 ignition[1912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:19.934407 ignition[1912]: INFO : mount: mount passed
May 17 01:47:19.934407 ignition[1912]: INFO : POST message to Packet Timeline
May 17 01:47:19.934407 ignition[1912]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:20.207981 coreos-metadata[1781]: May 17 01:47:20.207 INFO Fetch successful
May 17 01:47:20.214460 coreos-metadata[1782]: May 17 01:47:20.214 INFO Fetch successful
May 17 01:47:20.252698 coreos-metadata[1781]: May 17 01:47:20.252 INFO wrote hostname ci-4081.3.3-n-a9b446c9a0 to /sysroot/etc/hostname
May 17 01:47:20.255767 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 01:47:20.266796 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 01:47:20.266894 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 17 01:47:20.567763 ignition[1912]: INFO : GET result: OK
May 17 01:47:20.860752 ignition[1912]: INFO : Ignition finished successfully
May 17 01:47:20.863061 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 01:47:20.883192 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 01:47:20.894965 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 01:47:20.929432 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1939)
May 17 01:47:20.929469 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 01:47:20.943682 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 01:47:20.956529 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 01:47:20.979049 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 01:47:20.979070 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 01:47:20.987198 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 01:47:21.018433 ignition[1958]: INFO : Ignition 2.19.0
May 17 01:47:21.018433 ignition[1958]: INFO : Stage: files
May 17 01:47:21.027479 ignition[1958]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:47:21.027479 ignition[1958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:21.027479 ignition[1958]: DEBUG : files: compiled without relabeling support, skipping
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 01:47:21.027479 ignition[1958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 01:47:21.027479 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 17 01:47:21.023894 unknown[1958]: wrote ssh authorized keys file for user: core
May 17 01:47:21.138274 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 01:47:21.194265 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.204940 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 17 01:47:21.631398 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 01:47:21.948046 ignition[1958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:47:21.948046 ignition[1958]: INFO : files: op(c): [started] processing unit "containerd.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 17 01:47:21.972524 ignition[1958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:47:21.972524 ignition[1958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:47:21.972524 ignition[1958]: INFO : files: files passed
May 17 01:47:21.972524 ignition[1958]: INFO : POST message to Packet Timeline
May 17 01:47:21.972524 ignition[1958]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:22.629516 ignition[1958]: INFO : GET result: OK
May 17 01:47:22.928164 ignition[1958]: INFO : Ignition finished successfully
May 17 01:47:22.931205 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 01:47:22.949267 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 01:47:22.955894 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 01:47:22.967472 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 01:47:22.967547 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 01:47:23.002207 initrd-setup-root-after-ignition[1997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:47:23.002207 initrd-setup-root-after-ignition[1997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:47:22.985563 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 01:47:23.047676 initrd-setup-root-after-ignition[2001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:47:22.998054 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 01:47:23.018325 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 01:47:23.050152 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 01:47:23.050228 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 01:47:23.064599 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 01:47:23.075593 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 01:47:23.092316 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 01:47:23.107239 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 01:47:23.130654 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 01:47:23.168250 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 01:47:23.182359 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 01:47:23.191421 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 01:47:23.202697 systemd[1]: Stopped target timers.target - Timer Units.
May 17 01:47:23.214025 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 01:47:23.214126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 01:47:23.225428 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 01:47:23.236376 systemd[1]: Stopped target basic.target - Basic System.
May 17 01:47:23.247572 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 01:47:23.258748 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 01:47:23.269753 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 01:47:23.280722 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 01:47:23.291720 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 01:47:23.302745 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 01:47:23.313695 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 01:47:23.330167 systemd[1]: Stopped target swap.target - Swaps.
May 17 01:47:23.341253 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 01:47:23.341351 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 01:47:23.352681 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 01:47:23.363605 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 01:47:23.374747 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 01:47:23.378174 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 01:47:23.385934 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 01:47:23.386031 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 01:47:23.397198 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 01:47:23.397337 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 01:47:23.408319 systemd[1]: Stopped target paths.target - Path Units.
May 17 01:47:23.419349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 01:47:23.423156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 01:47:23.436224 systemd[1]: Stopped target slices.target - Slice Units.
May 17 01:47:23.447567 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 01:47:23.458935 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 01:47:23.459064 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 01:47:23.470377 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 01:47:23.569588 ignition[2023]: INFO : Ignition 2.19.0
May 17 01:47:23.569588 ignition[2023]: INFO : Stage: umount
May 17 01:47:23.569588 ignition[2023]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:47:23.569588 ignition[2023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:47:23.569588 ignition[2023]: INFO : umount: umount passed
May 17 01:47:23.569588 ignition[2023]: INFO : POST message to Packet Timeline
May 17 01:47:23.569588 ignition[2023]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:47:23.470472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 01:47:23.481893 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 01:47:23.481981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 01:47:23.493328 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 01:47:23.493413 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 01:47:23.504742 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 01:47:23.504825 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 01:47:23.532273 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 01:47:23.539454 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 01:47:23.539557 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 01:47:23.552486 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 01:47:23.563723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 01:47:23.563830 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 01:47:23.575363 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 01:47:23.575449 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 01:47:23.588841 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 01:47:23.590917 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 01:47:23.591003 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 01:47:23.630610 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 01:47:23.630859 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 01:47:24.058802 ignition[2023]: INFO : GET result: OK
May 17 01:47:24.390655 ignition[2023]: INFO : Ignition finished successfully
May 17 01:47:24.393938 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 01:47:24.394234 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 01:47:24.400896 systemd[1]: Stopped target network.target - Network.
May 17 01:47:24.409825 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 01:47:24.409878 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 01:47:24.419382 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 01:47:24.419414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 01:47:24.428875 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 01:47:24.428930 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 01:47:24.438397 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 01:47:24.438457 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 01:47:24.448138 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 01:47:24.448166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 01:47:24.457967 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 01:47:24.463153 systemd-networkd[1683]: enP1p1s0f0np0: DHCPv6 lease lost
May 17 01:47:24.467416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 01:47:24.471240 systemd-networkd[1683]: enP1p1s0f1np1: DHCPv6 lease lost
May 17 01:47:24.478670 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 01:47:24.478945 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 01:47:24.493969 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 01:47:24.494608 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 01:47:24.502742 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 01:47:24.502879 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 01:47:24.524259 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 01:47:24.530858 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 01:47:24.530906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 01:47:24.540780 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 01:47:24.540813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 01:47:24.550645 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 01:47:24.550676 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 01:47:24.560603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 01:47:24.560632 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 01:47:24.570904 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 01:47:24.595458 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 01:47:24.595587 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 01:47:24.604256 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 01:47:24.604426 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 01:47:24.613124 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 01:47:24.613158 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 01:47:24.623598 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 01:47:24.623637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 01:47:24.644593 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 01:47:24.644632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 01:47:24.655284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 01:47:24.655341 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:47:24.677322 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 01:47:24.693303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 01:47:24.693362 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 01:47:24.709576 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 01:47:24.709604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:47:24.721549 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 01:47:24.721641 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 01:47:25.259026 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 01:47:25.259199 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 01:47:25.270322 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 01:47:25.293244 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 01:47:25.301707 systemd[1]: Switching root.
May 17 01:47:25.359298 systemd-journald[898]: Journal stopped
May 17 01:47:27.325889 systemd-journald[898]: Received SIGTERM from PID 1 (systemd).
May 17 01:47:27.325918 kernel: SELinux: policy capability network_peer_controls=1
May 17 01:47:27.325928 kernel: SELinux: policy capability open_perms=1
May 17 01:47:27.325936 kernel: SELinux: policy capability extended_socket_class=1
May 17 01:47:27.325944 kernel: SELinux: policy capability always_check_network=0
May 17 01:47:27.325952 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 01:47:27.325961 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 01:47:27.325971 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 01:47:27.325979 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 01:47:27.325987 kernel: audit: type=1403 audit(1747446445.584:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 01:47:27.325996 systemd[1]: Successfully loaded SELinux policy in 115.490ms.
May 17 01:47:27.326006 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.536ms.
May 17 01:47:27.326016 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 01:47:27.326025 systemd[1]: Detected architecture arm64.
May 17 01:47:27.326036 systemd[1]: Detected first boot.
May 17 01:47:27.326046 systemd[1]: Hostname set to .
May 17 01:47:27.326055 systemd[1]: Initializing machine ID from random generator.
May 17 01:47:27.326064 zram_generator::config[2118]: No configuration found.
May 17 01:47:27.326075 systemd[1]: Populated /etc with preset unit settings.
May 17 01:47:27.326084 systemd[1]: Queued start job for default target multi-user.target.
May 17 01:47:27.326095 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 17 01:47:27.326105 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 01:47:27.326114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 01:47:27.326123 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 01:47:27.326136 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 01:47:27.326146 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 01:47:27.326157 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 01:47:27.326167 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 01:47:27.326176 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 01:47:27.326186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 01:47:27.326196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 01:47:27.326205 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 01:47:27.326215 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 01:47:27.326226 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 01:47:27.326235 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 01:47:27.326245 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 17 01:47:27.326254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 01:47:27.326264 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 01:47:27.326273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 01:47:27.326282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 01:47:27.326293 systemd[1]: Reached target slices.target - Slice Units.
May 17 01:47:27.326303 systemd[1]: Reached target swap.target - Swaps.
May 17 01:47:27.326314 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 01:47:27.326324 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 01:47:27.326333 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 01:47:27.326343 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 01:47:27.326353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 01:47:27.326362 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 01:47:27.326372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 01:47:27.326383 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 01:47:27.326392 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 01:47:27.326402 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 01:47:27.326412 systemd[1]: Mounting media.mount - External Media Directory...
May 17 01:47:27.326422 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 01:47:27.326433 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 01:47:27.326443 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 01:47:27.326453 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 01:47:27.326463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 01:47:27.326473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 01:47:27.326482 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 01:47:27.326492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 01:47:27.326502 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 01:47:27.326513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 01:47:27.326522 kernel: ACPI: bus type drm_connector registered
May 17 01:47:27.326531 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 01:47:27.326540 kernel: fuse: init (API version 7.39)
May 17 01:47:27.326549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 01:47:27.326559 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 01:47:27.326569 kernel: loop: module loaded
May 17 01:47:27.326578 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 01:47:27.326588 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 17 01:47:27.326599 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 01:47:27.326609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 01:47:27.326633 systemd-journald[2243]: Collecting audit messages is disabled.
May 17 01:47:27.326653 systemd-journald[2243]: Journal started
May 17 01:47:27.326674 systemd-journald[2243]: Runtime Journal (/run/log/journal/6c869a9318e943be9e2ffeb5ec47773b) is 8.0M, max 4.0G, 3.9G free.
May 17 01:47:27.351149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 01:47:27.379146 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 01:47:27.400146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 01:47:27.420147 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 01:47:27.426006 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 01:47:27.431723 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 01:47:27.437281 systemd[1]: Mounted media.mount - External Media Directory.
May 17 01:47:27.442757 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 01:47:27.448228 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 01:47:27.453630 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 01:47:27.459161 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 01:47:27.464698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 01:47:27.470312 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 01:47:27.470454 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 01:47:27.476014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 01:47:27.476159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 01:47:27.481569 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 01:47:27.481705 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 01:47:27.487175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 01:47:27.487311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 01:47:27.492623 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 01:47:27.492757 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 01:47:27.498350 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 01:47:27.498537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 01:47:27.504025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 01:47:27.509304 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 01:47:27.514729 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 01:47:27.520656 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 01:47:27.534924 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 01:47:27.558436 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 01:47:27.564510 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 01:47:27.569435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 01:47:27.570935 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 01:47:27.577249 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 01:47:27.582192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 01:47:27.583347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 01:47:27.588307 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 01:47:27.588683 systemd-journald[2243]: Time spent on flushing to /var/log/journal/6c869a9318e943be9e2ffeb5ec47773b is 25.988ms for 2334 entries.
May 17 01:47:27.588683 systemd-journald[2243]: System Journal (/var/log/journal/6c869a9318e943be9e2ffeb5ec47773b) is 8.0M, max 195.6M, 187.6M free.
May 17 01:47:27.634010 systemd-journald[2243]: Received client request to flush runtime journal.
May 17 01:47:27.589472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 01:47:27.607192 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 01:47:27.613208 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 01:47:27.619347 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 01:47:27.624379 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 01:47:27.629114 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 01:47:27.632383 systemd-tmpfiles[2281]: ACLs are not supported, ignoring.
May 17 01:47:27.632393 systemd-tmpfiles[2281]: ACLs are not supported, ignoring.
May 17 01:47:27.634252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 01:47:27.639371 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 01:47:27.643922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 01:47:27.653430 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 01:47:27.672440 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 01:47:27.677213 udevadm[2282]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 01:47:27.691815 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 01:47:27.709278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 01:47:27.724010 systemd-tmpfiles[2300]: ACLs are not supported, ignoring.
May 17 01:47:27.724023 systemd-tmpfiles[2300]: ACLs are not supported, ignoring.
May 17 01:47:27.727523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 01:47:27.948528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 01:47:27.963302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 01:47:27.986867 systemd-udevd[2309]: Using default interface naming scheme 'v255'.
May 17 01:47:28.000124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 01:47:28.028692 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 01:47:28.046143 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2335)
May 17 01:47:28.047147 kernel: IPMI message handler: version 39.2
May 17 01:47:28.068142 kernel: ipmi device interface
May 17 01:47:28.075265 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 17 01:47:28.080147 kernel: ipmi_ssif: IPMI SSIF Interface driver
May 17 01:47:28.080188 kernel: ipmi_si: IPMI System Interface driver
May 17 01:47:28.081136 kernel: ipmi_si: Unable to find any System Interface(s)
May 17 01:47:28.142644 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 01:47:28.162300 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 01:47:28.168175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 01:47:28.177993 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 01:47:28.184644 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 01:47:28.194028 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 01:47:28.201933 lvm[2407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 01:47:28.210374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:47:28.241511 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 01:47:28.247060 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 01:47:28.250758 systemd-networkd[2320]: lo: Link UP
May 17 01:47:28.250764 systemd-networkd[2320]: lo: Gained carrier
May 17 01:47:28.254335 systemd-networkd[2320]: bond0: netdev ready
May 17 01:47:28.263108 systemd-networkd[2320]: Enumeration completed
May 17 01:47:28.266479 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 01:47:28.270743 lvm[2432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 01:47:28.271451 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 01:47:28.271479 systemd-networkd[2320]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:b3:e4.network.
May 17 01:47:28.278338 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 01:47:28.304555 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 01:47:28.309774 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 01:47:28.314567 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 01:47:28.314593 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 01:47:28.319520 systemd[1]: Reached target machines.target - Containers.
May 17 01:47:28.325094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 01:47:28.339327 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 01:47:28.345572 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 01:47:28.350659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 01:47:28.351617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 01:47:28.357822 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 01:47:28.364240 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 01:47:28.370235 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 01:47:28.376514 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 01:47:28.377137 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 01:47:28.378168 kernel: loop0: detected capacity change from 0 to 114328
May 17 01:47:28.393246 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 01:47:28.394138 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 01:47:28.440143 kernel: loop1: detected capacity change from 0 to 8
May 17 01:47:28.495141 kernel: loop2: detected capacity change from 0 to 114432
May 17 01:47:28.555141 kernel: loop3: detected capacity change from 0 to 203944
May 17 01:47:28.579695 ldconfig[2440]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 01:47:28.581556 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 01:47:28.617150 kernel: loop4: detected capacity change from 0 to 114328
May 17 01:47:28.633147 kernel: loop5: detected capacity change from 0 to 8
May 17 01:47:28.644145 kernel: loop6: detected capacity change from 0 to 114432
May 17 01:47:28.660145 kernel: loop7: detected capacity change from 0 to 203944
May 17 01:47:28.666081 (sd-merge)[2489]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
May 17 01:47:28.666541 (sd-merge)[2489]: Merged extensions into '/usr'.
May 17 01:47:28.669477 systemd[1]: Reloading requested from client PID 2448 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 01:47:28.669492 systemd[1]: Reloading...
May 17 01:47:28.710142 zram_generator::config[2519]: No configuration found.
May 17 01:47:28.809612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 01:47:28.869489 systemd[1]: Reloading finished in 199 ms.
May 17 01:47:28.885671 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 01:47:28.908346 systemd[1]: Starting ensure-sysext.service...
May 17 01:47:28.914319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 01:47:28.921100 systemd[1]: Reloading requested from client PID 2576 ('systemctl') (unit ensure-sysext.service)...
May 17 01:47:28.921113 systemd[1]: Reloading...
May 17 01:47:28.931761 systemd-tmpfiles[2577]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 01:47:28.932011 systemd-tmpfiles[2577]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 01:47:28.932647 systemd-tmpfiles[2577]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 01:47:28.932854 systemd-tmpfiles[2577]: ACLs are not supported, ignoring. May 17 01:47:28.932900 systemd-tmpfiles[2577]: ACLs are not supported, ignoring. May 17 01:47:28.935236 systemd-tmpfiles[2577]: Detected autofs mount point /boot during canonicalization of boot. May 17 01:47:28.935243 systemd-tmpfiles[2577]: Skipping /boot May 17 01:47:28.942031 systemd-tmpfiles[2577]: Detected autofs mount point /boot during canonicalization of boot. May 17 01:47:28.942039 systemd-tmpfiles[2577]: Skipping /boot May 17 01:47:28.966140 zram_generator::config[2609]: No configuration found. May 17 01:47:29.060538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:47:29.120426 systemd[1]: Reloading finished in 199 ms. May 17 01:47:29.137994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 01:47:29.171744 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 01:47:29.178320 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 01:47:29.183571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 01:47:29.184629 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 01:47:29.190774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 01:47:29.196956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 01:47:29.202080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 17 01:47:29.203529 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 01:47:29.210491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 01:47:29.211545 augenrules[2694]: No rules May 17 01:47:29.216789 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 01:47:29.222700 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 01:47:29.227924 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 01:47:29.232850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:47:29.232984 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 01:47:29.237962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:47:29.238087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 01:47:29.242946 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:47:29.243130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 01:47:29.248531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 01:47:29.270640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 01:47:29.271958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 01:47:29.278156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 01:47:29.284310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 01:47:29.289218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 01:47:29.290670 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 17 01:47:29.295530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:47:29.296548 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 01:47:29.301436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:47:29.301570 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 01:47:29.306483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:47:29.306614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 01:47:29.311247 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:47:29.311432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 01:47:29.316331 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 01:47:29.320668 systemd-resolved[2695]: Positive Trust Anchors: May 17 01:47:29.320679 systemd-resolved[2695]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:47:29.320712 systemd-resolved[2695]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 01:47:29.324190 systemd-resolved[2695]: Using system hostname 'ci-4081.3.3-n-a9b446c9a0'. 
May 17 01:47:29.327279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 01:47:29.342454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 01:47:29.347994 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 01:47:29.353480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 01:47:29.359256 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 01:47:29.364006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 01:47:29.364123 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:47:29.364978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:47:29.365125 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 01:47:29.370066 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 01:47:29.370208 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 01:47:29.374958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:47:29.375086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 01:47:29.380051 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:47:29.380250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 01:47:29.386843 systemd[1]: Finished ensure-sysext.service. May 17 01:47:29.394887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 17 01:47:29.394951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 01:47:29.410346 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 01:47:29.455011 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 01:47:29.459433 systemd[1]: Reached target time-set.target - System Time Set. May 17 01:47:29.885138 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 17 01:47:29.902142 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 17 01:47:29.902777 systemd-networkd[2320]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:b3:e5.network. May 17 01:47:30.508139 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 17 01:47:30.524159 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 17 01:47:30.524303 systemd-networkd[2320]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 01:47:30.525588 systemd-networkd[2320]: enP1p1s0f0np0: Link UP May 17 01:47:30.525855 systemd-networkd[2320]: enP1p1s0f0np0: Gained carrier May 17 01:47:30.526815 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 01:47:30.543139 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:47:30.547352 systemd[1]: Reached target network.target - Network. May 17 01:47:30.551733 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 01:47:30.554686 systemd-networkd[2320]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:b3:e4.network. May 17 01:47:30.554941 systemd-networkd[2320]: enP1p1s0f1np1: Link UP May 17 01:47:30.555216 systemd-networkd[2320]: enP1p1s0f1np1: Gained carrier May 17 01:47:30.557205 systemd[1]: Reached target sysinit.target - System Initialization. 
May 17 01:47:30.561573 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 01:47:30.565461 systemd-networkd[2320]: bond0: Link UP May 17 01:47:30.565739 systemd-networkd[2320]: bond0: Gained carrier May 17 01:47:30.565925 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 01:47:30.565942 systemd-timesyncd[2743]: Network configuration changed, trying to establish connection. May 17 01:47:30.570425 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 01:47:30.574846 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 01:47:30.579222 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 01:47:30.583636 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 01:47:30.583658 systemd[1]: Reached target paths.target - Path Units. May 17 01:47:30.587952 systemd[1]: Reached target timers.target - Timer Units. May 17 01:47:30.592642 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 01:47:30.598600 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 01:47:30.603953 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 01:47:30.608939 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 01:47:30.613367 systemd[1]: Reached target sockets.target - Socket Units. May 17 01:47:30.617653 systemd[1]: Reached target basic.target - Basic System. May 17 01:47:30.622053 systemd[1]: System is tainted: cgroupsv1 May 17 01:47:30.622089 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
May 17 01:47:30.622107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 01:47:30.623322 systemd[1]: Starting containerd.service - containerd container runtime... May 17 01:47:30.629179 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 01:47:30.632138 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 17 01:47:30.632208 kernel: bond0: active interface up! May 17 01:47:30.655708 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 01:47:30.661570 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 01:47:30.667385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 01:47:30.671671 coreos-metadata[2748]: May 17 01:47:30.671 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:47:30.671988 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 01:47:30.672747 jq[2753]: false May 17 01:47:30.673222 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 01:47:30.677171 dbus-daemon[2750]: [system] SELinux support is enabled May 17 01:47:30.680256 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 01:47:30.686210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 17 01:47:30.687253 extend-filesystems[2754]: Found loop4 May 17 01:47:30.695968 extend-filesystems[2754]: Found loop5 May 17 01:47:30.695968 extend-filesystems[2754]: Found loop6 May 17 01:47:30.695968 extend-filesystems[2754]: Found loop7 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme1n1 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1p1 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1p2 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1p3 May 17 01:47:30.695968 extend-filesystems[2754]: Found usr May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1p4 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1p6 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1p7 May 17 01:47:30.695968 extend-filesystems[2754]: Found nvme0n1p9 May 17 01:47:30.695968 extend-filesystems[2754]: Checking size of /dev/nvme0n1p9 May 17 01:47:30.850861 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks May 17 01:47:30.850891 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2333) May 17 01:47:30.850904 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 17 01:47:30.692369 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 01:47:30.831011 dbus-daemon[2750]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 01:47:30.851141 extend-filesystems[2754]: Resized partition /dev/nvme0n1p9 May 17 01:47:30.703824 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 01:47:30.859852 extend-filesystems[2775]: resize2fs 1.47.1 (20-May-2024) May 17 01:47:30.757967 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 17 01:47:30.759471 systemd[1]: Starting update-engine.service - Update Engine... May 17 01:47:30.766488 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 01:47:30.869297 update_engine[2783]: I20250517 01:47:30.805724 2783 main.cc:92] Flatcar Update Engine starting May 17 01:47:30.869297 update_engine[2783]: I20250517 01:47:30.808430 2783 update_check_scheduler.cc:74] Next update check in 5m11s May 17 01:47:30.774775 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 01:47:30.869667 jq[2784]: true May 17 01:47:30.788016 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 01:47:30.788464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 01:47:30.870082 tar[2790]: linux-arm64/helm May 17 01:47:30.788745 systemd[1]: motdgen.service: Deactivated successfully. May 17 01:47:30.870465 jq[2792]: true May 17 01:47:30.789149 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 01:47:30.793606 systemd-logind[2777]: Watching system buttons on /dev/input/event0 (Power Button) May 17 01:47:30.797449 systemd-logind[2777]: New seat seat0. May 17 01:47:30.871069 bash[2815]: Updated "/home/core/.ssh/authorized_keys" May 17 01:47:30.798144 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 01:47:30.798701 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 01:47:30.814213 systemd[1]: Started systemd-logind.service - User Login Management. May 17 01:47:30.819123 (ntainerd)[2793]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 01:47:30.836076 systemd[1]: Started update-engine.service - Update Engine. 
May 17 01:47:30.846801 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 01:47:30.847080 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 01:47:30.855571 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 01:47:30.855706 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 01:47:30.865093 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 01:47:30.866958 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 01:47:30.876663 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 01:47:30.886596 systemd[1]: Starting sshkeys.service... May 17 01:47:30.899749 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 01:47:30.900393 locksmithd[2816]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 01:47:30.905686 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 01:47:30.925468 coreos-metadata[2834]: May 17 01:47:30.925 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:47:30.967556 containerd[2793]: time="2025-05-17T01:47:30.967468120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 01:47:30.990017 containerd[2793]: time="2025-05-17T01:47:30.989979480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 17 01:47:30.991303 containerd[2793]: time="2025-05-17T01:47:30.991275400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 01:47:30.991325 containerd[2793]: time="2025-05-17T01:47:30.991301680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 01:47:30.991325 containerd[2793]: time="2025-05-17T01:47:30.991317000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 01:47:30.991474 containerd[2793]: time="2025-05-17T01:47:30.991461520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 01:47:30.991502 containerd[2793]: time="2025-05-17T01:47:30.991480360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 01:47:30.991549 containerd[2793]: time="2025-05-17T01:47:30.991533120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:47:30.991570 containerd[2793]: time="2025-05-17T01:47:30.991547520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 01:47:30.992478 containerd[2793]: time="2025-05-17T01:47:30.992421600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:47:30.992505 containerd[2793]: time="2025-05-17T01:47:30.992478520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 01:47:30.992505 containerd[2793]: time="2025-05-17T01:47:30.992497200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:47:30.992544 containerd[2793]: time="2025-05-17T01:47:30.992509640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 01:47:30.992642 containerd[2793]: time="2025-05-17T01:47:30.992626880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 01:47:30.992864 containerd[2793]: time="2025-05-17T01:47:30.992845600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 01:47:30.993017 containerd[2793]: time="2025-05-17T01:47:30.992998320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:47:30.993041 containerd[2793]: time="2025-05-17T01:47:30.993015000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 01:47:30.993118 containerd[2793]: time="2025-05-17T01:47:30.993102520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 01:47:30.993172 containerd[2793]: time="2025-05-17T01:47:30.993159280Z" level=info msg="metadata content store policy set" policy=shared May 17 01:47:31.000233 containerd[2793]: time="2025-05-17T01:47:31.000199840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 01:47:31.000273 containerd[2793]: time="2025-05-17T01:47:31.000258000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 01:47:31.000323 containerd[2793]: time="2025-05-17T01:47:31.000273760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 01:47:31.000323 containerd[2793]: time="2025-05-17T01:47:31.000288520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 01:47:31.000323 containerd[2793]: time="2025-05-17T01:47:31.000303440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 01:47:31.000455 containerd[2793]: time="2025-05-17T01:47:31.000439520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 01:47:31.001294 containerd[2793]: time="2025-05-17T01:47:31.001272480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 01:47:31.001461 containerd[2793]: time="2025-05-17T01:47:31.001446680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 01:47:31.001481 containerd[2793]: time="2025-05-17T01:47:31.001466160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 01:47:31.001507 containerd[2793]: time="2025-05-17T01:47:31.001484280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 17 01:47:31.001507 containerd[2793]: time="2025-05-17T01:47:31.001498200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001542 containerd[2793]: time="2025-05-17T01:47:31.001510760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001542 containerd[2793]: time="2025-05-17T01:47:31.001523160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001542 containerd[2793]: time="2025-05-17T01:47:31.001536880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001589 containerd[2793]: time="2025-05-17T01:47:31.001552080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001589 containerd[2793]: time="2025-05-17T01:47:31.001565000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001589 containerd[2793]: time="2025-05-17T01:47:31.001577560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001641 containerd[2793]: time="2025-05-17T01:47:31.001589240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 01:47:31.001641 containerd[2793]: time="2025-05-17T01:47:31.001607920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001641 containerd[2793]: time="2025-05-17T01:47:31.001621760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 17 01:47:31.001641 containerd[2793]: time="2025-05-17T01:47:31.001633960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001711 containerd[2793]: time="2025-05-17T01:47:31.001646840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001711 containerd[2793]: time="2025-05-17T01:47:31.001659800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001711 containerd[2793]: time="2025-05-17T01:47:31.001677160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001711 containerd[2793]: time="2025-05-17T01:47:31.001690400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001711 containerd[2793]: time="2025-05-17T01:47:31.001703920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001795 containerd[2793]: time="2025-05-17T01:47:31.001717560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001795 containerd[2793]: time="2025-05-17T01:47:31.001733040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001795 containerd[2793]: time="2025-05-17T01:47:31.001743680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001795 containerd[2793]: time="2025-05-17T01:47:31.001755200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001795 containerd[2793]: time="2025-05-17T01:47:31.001766400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 17 01:47:31.001795 containerd[2793]: time="2025-05-17T01:47:31.001782680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 01:47:31.001894 containerd[2793]: time="2025-05-17T01:47:31.001801320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001894 containerd[2793]: time="2025-05-17T01:47:31.001813080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 01:47:31.001894 containerd[2793]: time="2025-05-17T01:47:31.001823520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 01:47:31.001944 containerd[2793]: time="2025-05-17T01:47:31.001929800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 01:47:31.001962 containerd[2793]: time="2025-05-17T01:47:31.001946880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 01:47:31.001962 containerd[2793]: time="2025-05-17T01:47:31.001958320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 01:47:31.002001 containerd[2793]: time="2025-05-17T01:47:31.001970680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 01:47:31.002001 containerd[2793]: time="2025-05-17T01:47:31.001980800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 01:47:31.002001 containerd[2793]: time="2025-05-17T01:47:31.001996520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 17 01:47:31.002052 containerd[2793]: time="2025-05-17T01:47:31.002008800Z" level=info msg="NRI interface is disabled by configuration." May 17 01:47:31.002052 containerd[2793]: time="2025-05-17T01:47:31.002019600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 01:47:31.002396 containerd[2793]: time="2025-05-17T01:47:31.002347280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 01:47:31.002508 containerd[2793]: time="2025-05-17T01:47:31.002404400Z" level=info msg="Connect containerd service" May 17 01:47:31.002508 containerd[2793]: time="2025-05-17T01:47:31.002434040Z" level=info msg="using legacy CRI server" May 17 01:47:31.002508 containerd[2793]: time="2025-05-17T01:47:31.002441080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 01:47:31.002566 containerd[2793]: time="2025-05-17T01:47:31.002514880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 01:47:31.003087 containerd[2793]: time="2025-05-17T01:47:31.003065080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:47:31.003261 containerd[2793]: time="2025-05-17T01:47:31.003221720Z" level=info msg="Start subscribing containerd event" May 17 
01:47:31.003326 containerd[2793]: time="2025-05-17T01:47:31.003279880Z" level=info msg="Start recovering state" May 17 01:47:31.003355 containerd[2793]: time="2025-05-17T01:47:31.003344520Z" level=info msg="Start event monitor" May 17 01:47:31.003375 containerd[2793]: time="2025-05-17T01:47:31.003358320Z" level=info msg="Start snapshots syncer" May 17 01:47:31.003375 containerd[2793]: time="2025-05-17T01:47:31.003368120Z" level=info msg="Start cni network conf syncer for default" May 17 01:47:31.003414 containerd[2793]: time="2025-05-17T01:47:31.003375320Z" level=info msg="Start streaming server" May 17 01:47:31.003505 containerd[2793]: time="2025-05-17T01:47:31.003489680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 01:47:31.003548 containerd[2793]: time="2025-05-17T01:47:31.003538600Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 01:47:31.003595 containerd[2793]: time="2025-05-17T01:47:31.003586400Z" level=info msg="containerd successfully booted in 0.036955s" May 17 01:47:31.003652 systemd[1]: Started containerd.service - containerd container runtime. May 17 01:47:31.030125 sshd_keygen[2780]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 01:47:31.048984 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 01:47:31.065394 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 01:47:31.077296 systemd[1]: issuegen.service: Deactivated successfully. May 17 01:47:31.077509 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 01:47:31.084335 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 01:47:31.097139 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 01:47:31.103471 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 01:47:31.109278 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
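The entries above show containerd serving on both `/run/containerd/containerd.sock` and its ttrpc twin before systemd marks the unit started. A minimal sketch that checks for the main socket (path taken from the log; it reports either way rather than failing):

```shell
# Check for the containerd socket logged above; informational only.
SOCK=/run/containerd/containerd.sock
if [ -S "$SOCK" ]; then
    echo "containerd socket present: $SOCK"
else
    echo "containerd socket not found: $SOCK"
fi
```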
May 17 01:47:31.114401 systemd[1]: Reached target getty.target - Login Prompts. May 17 01:47:31.128186 tar[2790]: linux-arm64/LICENSE May 17 01:47:31.128255 tar[2790]: linux-arm64/README.md May 17 01:47:31.149685 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 01:47:31.291145 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 17 01:47:31.309744 extend-filesystems[2775]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 01:47:31.309744 extend-filesystems[2775]: old_desc_blocks = 1, new_desc_blocks = 112 May 17 01:47:31.309744 extend-filesystems[2775]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 17 01:47:31.336698 extend-filesystems[2754]: Resized filesystem in /dev/nvme0n1p9 May 17 01:47:31.312748 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 01:47:31.313139 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 01:47:32.477219 systemd-networkd[2320]: bond0: Gained IPv6LL May 17 01:47:32.479472 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 01:47:32.484864 systemd[1]: Reached target network-online.target - Network is Online. May 17 01:47:32.503341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 01:47:32.509706 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 01:47:32.530409 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 01:47:33.191596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
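The resize messages above report the filesystem's new size as 233815889 blocks of 4k each; converting to bytes (figures taken from the log) gives roughly 892 GiB:

```shell
# Convert the resized filesystem size logged above (4k blocks) to bytes.
blocks=233815889
block_size=4096
echo $((blocks * block_size))   # 957709881344 bytes, roughly 892 GiB
```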
May 17 01:47:33.197501 (kubelet)[2910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 01:47:33.633052 kubelet[2910]: E0517 01:47:33.632976 2910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:47:33.635249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:47:33.635429 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:47:33.353031 systemd-resolved[2695]: Clock change detected. Flushing caches. May 17 01:47:33.365654 systemd-journald[2243]: Time jumped backwards, rotating. May 17 01:47:33.353160 systemd-timesyncd[2743]: Contacted time server 12.203.31.102:123 (0.flatcar.pool.ntp.org). May 17 01:47:33.353210 systemd-timesyncd[2743]: Initial clock synchronization to Sat 2025-05-17 01:47:33.352981 UTC. May 17 01:47:33.614323 coreos-metadata[2748]: May 17 01:47:33.614 INFO Fetch successful May 17 01:47:33.682152 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 01:47:33.689253 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 17 01:47:33.735266 coreos-metadata[2834]: May 17 01:47:33.735 INFO Fetch successful May 17 01:47:33.785730 unknown[2834]: wrote ssh authorized keys file for user: core May 17 01:47:33.799076 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 17 01:47:33.799365 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 17 01:47:33.820657 update-ssh-keys[2945]: Updated "/home/core/.ssh/authorized_keys" May 17 01:47:33.821786 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
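The kubelet exit above is the usual first-boot state before `kubeadm init` or `kubeadm join` has written `/var/lib/kubelet/config.yaml`; systemd will keep restarting the unit until the file appears. A defensive check (path taken from the error message; prints either way):

```shell
# Report whether the kubelet config the unit is failing on exists yet.
CFG=/var/lib/kubelet/config.yaml
if [ -f "$CFG" ]; then
    echo "kubelet config present: $CFG"
else
    echo "kubelet config missing: $CFG"
fi
```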
May 17 01:47:33.828599 systemd[1]: Finished sshkeys.service. May 17 01:47:33.916899 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 01:47:33.933262 systemd[1]: Started sshd@0-147.28.150.2:22-147.75.109.163:44876.service - OpenSSH per-connection server daemon (147.75.109.163:44876). May 17 01:47:34.119252 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 17 01:47:34.124640 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 01:47:34.129714 systemd[1]: Startup finished in 22.334s (kernel) + 9.165s (userspace) = 31.499s. May 17 01:47:34.155930 login[2877]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 17 01:47:34.156346 login[2876]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:34.164161 systemd-logind[2777]: New session 2 of user core. May 17 01:47:34.165015 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 01:47:34.176367 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 01:47:34.184498 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 01:47:34.186336 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 01:47:34.192505 (systemd)[2969]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:34.287868 systemd[2969]: Queued start job for default target default.target. May 17 01:47:34.288214 systemd[2969]: Created slice app.slice - User Application Slice. May 17 01:47:34.288234 systemd[2969]: Reached target paths.target - Paths. May 17 01:47:34.288245 systemd[2969]: Reached target timers.target - Timers. May 17 01:47:34.299140 systemd[2969]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 01:47:34.304284 systemd[2969]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
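The "Startup finished" line above sums kernel and userspace time; the total can be re-derived from the two figures it reports:

```shell
# Re-derive the boot total from the kernel and userspace times logged above.
awk 'BEGIN { printf "%.3fs\n", 22.334 + 9.165 }'
```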
May 17 01:47:34.304332 systemd[2969]: Reached target sockets.target - Sockets. May 17 01:47:34.304344 systemd[2969]: Reached target basic.target - Basic System. May 17 01:47:34.304382 systemd[2969]: Reached target default.target - Main User Target. May 17 01:47:34.304403 systemd[2969]: Startup finished in 107ms. May 17 01:47:34.304817 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 01:47:34.306027 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 01:47:34.348809 sshd[2956]: Accepted publickey for core from 147.75.109.163 port 44876 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:47:34.350043 sshd[2956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:34.352833 systemd-logind[2777]: New session 3 of user core. May 17 01:47:34.363359 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 01:47:34.707334 systemd[1]: Started sshd@1-147.28.150.2:22-147.75.109.163:44884.service - OpenSSH per-connection server daemon (147.75.109.163:44884). May 17 01:47:35.121495 sshd[2997]: Accepted publickey for core from 147.75.109.163 port 44884 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:47:35.122546 sshd[2997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:35.125209 systemd-logind[2777]: New session 4 of user core. May 17 01:47:35.136255 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 01:47:35.157365 login[2877]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:35.160051 systemd-logind[2777]: New session 1 of user core. May 17 01:47:35.168367 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 01:47:35.413874 sshd[2997]: pam_unix(sshd:session): session closed for user core May 17 01:47:35.416367 systemd[1]: sshd@1-147.28.150.2:22-147.75.109.163:44884.service: Deactivated successfully. 
May 17 01:47:35.417937 systemd-logind[2777]: Session 4 logged out. Waiting for processes to exit. May 17 01:47:35.418037 systemd[1]: session-4.scope: Deactivated successfully. May 17 01:47:35.418678 systemd-logind[2777]: Removed session 4. May 17 01:47:35.485254 systemd[1]: Started sshd@2-147.28.150.2:22-147.75.109.163:44888.service - OpenSSH per-connection server daemon (147.75.109.163:44888). May 17 01:47:35.901287 sshd[3014]: Accepted publickey for core from 147.75.109.163 port 44888 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:47:35.902307 sshd[3014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:35.904936 systemd-logind[2777]: New session 5 of user core. May 17 01:47:35.914251 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 01:47:36.193545 sshd[3014]: pam_unix(sshd:session): session closed for user core May 17 01:47:36.196979 systemd[1]: sshd@2-147.28.150.2:22-147.75.109.163:44888.service: Deactivated successfully. May 17 01:47:36.198800 systemd-logind[2777]: Session 5 logged out. Waiting for processes to exit. May 17 01:47:36.198984 systemd[1]: session-5.scope: Deactivated successfully. May 17 01:47:36.199686 systemd-logind[2777]: Removed session 5. May 17 01:47:36.264254 systemd[1]: Started sshd@3-147.28.150.2:22-147.75.109.163:44904.service - OpenSSH per-connection server daemon (147.75.109.163:44904). May 17 01:47:36.673187 sshd[3022]: Accepted publickey for core from 147.75.109.163 port 44904 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:47:36.674169 sshd[3022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:36.676718 systemd-logind[2777]: New session 6 of user core. May 17 01:47:36.687333 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 17 01:47:36.965481 sshd[3022]: pam_unix(sshd:session): session closed for user core May 17 01:47:36.967985 systemd[1]: sshd@3-147.28.150.2:22-147.75.109.163:44904.service: Deactivated successfully. May 17 01:47:36.969526 systemd-logind[2777]: Session 6 logged out. Waiting for processes to exit. May 17 01:47:36.969630 systemd[1]: session-6.scope: Deactivated successfully. May 17 01:47:36.970248 systemd-logind[2777]: Removed session 6. May 17 01:47:37.036253 systemd[1]: Started sshd@4-147.28.150.2:22-147.75.109.163:44920.service - OpenSSH per-connection server daemon (147.75.109.163:44920). May 17 01:47:37.450559 sshd[3030]: Accepted publickey for core from 147.75.109.163 port 44920 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:47:37.451548 sshd[3030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:37.454030 systemd-logind[2777]: New session 7 of user core. May 17 01:47:37.467244 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 01:47:37.689472 sudo[3034]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 01:47:37.689740 sudo[3034]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 01:47:37.702951 sudo[3034]: pam_unix(sudo:session): session closed for user root May 17 01:47:37.765716 sshd[3030]: pam_unix(sshd:session): session closed for user core May 17 01:47:37.769562 systemd[1]: sshd@4-147.28.150.2:22-147.75.109.163:44920.service: Deactivated successfully. May 17 01:47:37.771472 systemd-logind[2777]: Session 7 logged out. Waiting for processes to exit. May 17 01:47:37.771589 systemd[1]: session-7.scope: Deactivated successfully. May 17 01:47:37.772389 systemd-logind[2777]: Removed session 7. May 17 01:47:37.836253 systemd[1]: Started sshd@5-147.28.150.2:22-147.75.109.163:44936.service - OpenSSH per-connection server daemon (147.75.109.163:44936). 
May 17 01:47:38.252896 sshd[3040]: Accepted publickey for core from 147.75.109.163 port 44936 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:47:38.254022 sshd[3040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:38.256716 systemd-logind[2777]: New session 8 of user core. May 17 01:47:38.269256 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 01:47:38.486806 sudo[3045]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 01:47:38.487095 sudo[3045]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 01:47:38.489743 sudo[3045]: pam_unix(sudo:session): session closed for user root May 17 01:47:38.494064 sudo[3044]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 01:47:38.494334 sudo[3044]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 01:47:38.515252 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 01:47:38.516470 auditctl[3048]: No rules May 17 01:47:38.517283 systemd[1]: audit-rules.service: Deactivated successfully. May 17 01:47:38.517495 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 01:47:38.519173 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 01:47:38.542063 augenrules[3067]: No rules May 17 01:47:38.543292 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 01:47:38.544123 sudo[3044]: pam_unix(sudo:session): session closed for user root May 17 01:47:38.606813 sshd[3040]: pam_unix(sshd:session): session closed for user core May 17 01:47:38.609396 systemd[1]: sshd@5-147.28.150.2:22-147.75.109.163:44936.service: Deactivated successfully. May 17 01:47:38.610943 systemd-logind[2777]: Session 8 logged out. Waiting for processes to exit. 
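After the two sudo invocations above delete the shipped rule files, both auditctl and augenrules report "No rules" on reload. For contrast, a rules.d fragment of the kind augenrules merges would look like this (hypothetical file name and watch, shown only to illustrate the format):

```
# /etc/audit/rules.d/10-example.rules  (hypothetical)
-D
-b 8192
-w /etc/passwd -p wa -k identity
```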
May 17 01:47:38.611038 systemd[1]: session-8.scope: Deactivated successfully. May 17 01:47:38.611678 systemd-logind[2777]: Removed session 8. May 17 01:47:38.678350 systemd[1]: Started sshd@6-147.28.150.2:22-147.75.109.163:42586.service - OpenSSH per-connection server daemon (147.75.109.163:42586). May 17 01:47:39.092550 sshd[3076]: Accepted publickey for core from 147.75.109.163 port 42586 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:47:39.093588 sshd[3076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:47:39.096157 systemd-logind[2777]: New session 9 of user core. May 17 01:47:39.104333 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 01:47:39.325715 sudo[3080]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 01:47:39.325996 sudo[3080]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 01:47:39.581350 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 01:47:39.581607 (dockerd)[3111]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 01:47:39.782873 dockerd[3111]: time="2025-05-17T01:47:39.782826545Z" level=info msg="Starting up" May 17 01:47:39.946604 dockerd[3111]: time="2025-05-17T01:47:39.946538345Z" level=info msg="Loading containers: start." May 17 01:47:40.036084 kernel: Initializing XFRM netlink socket May 17 01:47:40.099502 systemd-networkd[2320]: docker0: Link UP May 17 01:47:40.118160 dockerd[3111]: time="2025-05-17T01:47:40.118126625Z" level=info msg="Loading containers: done." 
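systemd-networkd reports `docker0: Link UP` between dockerd's "Loading containers: start" and "done" above. A defensive check for the bridge (interface name taken from the log; reports either way):

```shell
# Report whether the docker0 bridge logged above exists on this host.
IF=/sys/class/net/docker0
if [ -d "$IF" ]; then
    echo "docker0 bridge present"
else
    echo "docker0 bridge absent"
fi
```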
May 17 01:47:40.126676 dockerd[3111]: time="2025-05-17T01:47:40.126643225Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 01:47:40.126735 dockerd[3111]: time="2025-05-17T01:47:40.126720865Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 01:47:40.126836 dockerd[3111]: time="2025-05-17T01:47:40.126820825Z" level=info msg="Daemon has completed initialization" May 17 01:47:40.145491 dockerd[3111]: time="2025-05-17T01:47:40.145379625Z" level=info msg="API listen on /run/docker.sock" May 17 01:47:40.145658 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 01:47:40.727510 containerd[2793]: time="2025-05-17T01:47:40.727475385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 01:47:40.836919 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2256403670-merged.mount: Deactivated successfully. May 17 01:47:41.273777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226936828.mount: Deactivated successfully. 
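The PullImage line above names a full reference, `registry.k8s.io/kube-apiserver:v1.31.9`. Splitting such a reference into repository and tag with plain parameter expansion (a sketch; assumes the registry host carries no `:port`):

```shell
# Split the image reference logged above into repository and tag.
image='registry.k8s.io/kube-apiserver:v1.31.9'
repo=${image%:*}    # strip the shortest trailing ':...' — the tag
tag=${image##*:}    # keep only what follows the last ':'
echo "repo=$repo tag=$tag"
```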
May 17 01:47:42.419791 containerd[2793]: time="2025-05-17T01:47:42.419745785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:42.420201 containerd[2793]: time="2025-05-17T01:47:42.419759745Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651974" May 17 01:47:42.420884 containerd[2793]: time="2025-05-17T01:47:42.420857345Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:42.423697 containerd[2793]: time="2025-05-17T01:47:42.423673265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:42.424793 containerd[2793]: time="2025-05-17T01:47:42.424761905Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.69723708s" May 17 01:47:42.424822 containerd[2793]: time="2025-05-17T01:47:42.424803385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 01:47:42.426042 containerd[2793]: time="2025-05-17T01:47:42.426022545Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 01:47:43.385258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
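The pull above reports 25648774 bytes fetched in 1.69723708s; an effective throughput can be derived from those two logged figures. A sketch only — the duration includes unpack work, so this is at best a lower bound on network speed:

```shell
# Effective pull throughput from the size and duration logged above.
awk 'BEGIN { printf "%.1f MiB/s\n", 25648774 / 1.69723708 / 1048576 }'
```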
May 17 01:47:43.394286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 01:47:43.499294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 01:47:43.502821 (kubelet)[3389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 01:47:43.538942 kubelet[3389]: E0517 01:47:43.538908 3389 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:47:43.541844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:47:43.542006 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:47:43.805157 containerd[2793]: time="2025-05-17T01:47:43.805086665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:43.805388 containerd[2793]: time="2025-05-17T01:47:43.805098585Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459528" May 17 01:47:43.806114 containerd[2793]: time="2025-05-17T01:47:43.806092145Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:43.809214 containerd[2793]: time="2025-05-17T01:47:43.809186945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:43.810252 containerd[2793]: time="2025-05-17T01:47:43.810232865Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.38417916s" May 17 01:47:43.810297 containerd[2793]: time="2025-05-17T01:47:43.810257105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 01:47:43.810686 containerd[2793]: time="2025-05-17T01:47:43.810666825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 01:47:44.830847 containerd[2793]: time="2025-05-17T01:47:44.830810585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:44.831062 containerd[2793]: time="2025-05-17T01:47:44.830878745Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125279" May 17 01:47:44.831921 containerd[2793]: time="2025-05-17T01:47:44.831894825Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:44.834748 containerd[2793]: time="2025-05-17T01:47:44.834722745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:44.835861 containerd[2793]: time="2025-05-17T01:47:44.835832185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", 
repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.02513316s" May 17 01:47:44.835887 containerd[2793]: time="2025-05-17T01:47:44.835868305Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 01:47:44.836278 containerd[2793]: time="2025-05-17T01:47:44.836256585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 01:47:45.443961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465112120.mount: Deactivated successfully. May 17 01:47:45.794429 containerd[2793]: time="2025-05-17T01:47:45.794309905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:45.794429 containerd[2793]: time="2025-05-17T01:47:45.794375945Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871375" May 17 01:47:45.795093 containerd[2793]: time="2025-05-17T01:47:45.795068585Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:45.798783 containerd[2793]: time="2025-05-17T01:47:45.798749345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:45.799392 containerd[2793]: time="2025-05-17T01:47:45.799361825Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 963.07044ms" May 17 01:47:45.799422 containerd[2793]: time="2025-05-17T01:47:45.799407585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 01:47:45.799784 containerd[2793]: time="2025-05-17T01:47:45.799767145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 01:47:46.173199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520109473.mount: Deactivated successfully. May 17 01:47:47.258210 containerd[2793]: time="2025-05-17T01:47:47.258173825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:47.258531 containerd[2793]: time="2025-05-17T01:47:47.258238905Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 17 01:47:47.259363 containerd[2793]: time="2025-05-17T01:47:47.259337625Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:47.264135 containerd[2793]: time="2025-05-17T01:47:47.264099945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:47.265331 containerd[2793]: time="2025-05-17T01:47:47.265235345Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.46543276s" May 17 01:47:47.265331 containerd[2793]: time="2025-05-17T01:47:47.265284705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 01:47:47.265709 containerd[2793]: time="2025-05-17T01:47:47.265686985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 01:47:47.523181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851021405.mount: Deactivated successfully. May 17 01:47:47.523482 containerd[2793]: time="2025-05-17T01:47:47.523392625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:47.523532 containerd[2793]: time="2025-05-17T01:47:47.523504025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 17 01:47:47.524275 containerd[2793]: time="2025-05-17T01:47:47.524255905Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:47.526215 containerd[2793]: time="2025-05-17T01:47:47.526194745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:47.527126 containerd[2793]: time="2025-05-17T01:47:47.527099025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 261.37876ms" May 17 
01:47:47.527153 containerd[2793]: time="2025-05-17T01:47:47.527131585Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 01:47:47.527434 containerd[2793]: time="2025-05-17T01:47:47.527414265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 01:47:47.851344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292412163.mount: Deactivated successfully. May 17 01:47:50.838681 containerd[2793]: time="2025-05-17T01:47:50.838635825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:50.839136 containerd[2793]: time="2025-05-17T01:47:50.838636745Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 17 01:47:50.839816 containerd[2793]: time="2025-05-17T01:47:50.839795025Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:50.843036 containerd[2793]: time="2025-05-17T01:47:50.843010425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:47:50.844320 containerd[2793]: time="2025-05-17T01:47:50.844289985Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.31684484s" May 17 01:47:50.844370 containerd[2793]: time="2025-05-17T01:47:50.844325945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 01:47:53.753574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 01:47:53.764271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 01:47:53.868429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 01:47:53.871982 (kubelet)[3634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 01:47:53.902570 kubelet[3634]: E0517 01:47:53.902534 3634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:47:53.904777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:47:53.904938 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:47:54.651255 systemd[1]: Started sshd@7-147.28.150.2:22-218.92.0.158:35794.service - OpenSSH per-connection server daemon (218.92.0.158:35794). May 17 01:47:55.618728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 01:47:55.629245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 01:47:55.651251 systemd[1]: Reloading requested from client PID 3665 ('systemctl') (unit session-9.scope)... May 17 01:47:55.651266 systemd[1]: Reloading... May 17 01:47:55.719083 zram_generator::config[3709]: No configuration found. May 17 01:47:55.815580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 01:47:55.887254 systemd[1]: Reloading finished in 235 ms. May 17 01:47:55.924839 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 01:47:55.924897 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 01:47:55.925137 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 01:47:55.927404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 01:47:56.040544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 01:47:56.044320 (kubelet)[3785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 01:47:56.076074 kubelet[3785]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:47:56.076074 kubelet[3785]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 01:47:56.076074 kubelet[3785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 01:47:56.076265 kubelet[3785]: I0517 01:47:56.076131 3785 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 01:47:56.335000 sshd[3800]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:47:57.171867 kubelet[3785]: I0517 01:47:57.171835 3785 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 01:47:57.171867 kubelet[3785]: I0517 01:47:57.171862 3785 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 01:47:57.172178 kubelet[3785]: I0517 01:47:57.172066 3785 server.go:934] "Client rotation is on, will bootstrap in background" May 17 01:47:57.193149 kubelet[3785]: E0517 01:47:57.193121 3785 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.150.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError" May 17 01:47:57.194038 kubelet[3785]: I0517 01:47:57.194021 3785 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 01:47:57.199679 kubelet[3785]: E0517 01:47:57.199655 3785 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 01:47:57.199711 kubelet[3785]: I0517 01:47:57.199680 3785 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 01:47:57.220298 kubelet[3785]: I0517 01:47:57.220274 3785 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 01:47:57.221256 kubelet[3785]: I0517 01:47:57.221235 3785 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 01:47:57.221425 kubelet[3785]: I0517 01:47:57.221398 3785 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 01:47:57.221582 kubelet[3785]: I0517 01:47:57.221425 3785 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-a9b446c9a0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimenta
lMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 01:47:57.221655 kubelet[3785]: I0517 01:47:57.221648 3785 topology_manager.go:138] "Creating topology manager with none policy" May 17 01:47:57.221680 kubelet[3785]: I0517 01:47:57.221657 3785 container_manager_linux.go:300] "Creating device plugin manager" May 17 01:47:57.221903 kubelet[3785]: I0517 01:47:57.221891 3785 state_mem.go:36] "Initialized new in-memory state store" May 17 01:47:57.238207 kubelet[3785]: I0517 01:47:57.238188 3785 kubelet.go:408] "Attempting to sync node with API server" May 17 01:47:57.238234 kubelet[3785]: I0517 01:47:57.238214 3785 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 01:47:57.238253 kubelet[3785]: I0517 01:47:57.238237 3785 kubelet.go:314] "Adding apiserver pod source" May 17 01:47:57.238320 kubelet[3785]: I0517 01:47:57.238311 3785 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 01:47:57.240746 kubelet[3785]: W0517 01:47:57.240701 3785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.150.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-a9b446c9a0&limit=500&resourceVersion=0": dial tcp 147.28.150.2:6443: connect: connection refused May 17 01:47:57.240768 kubelet[3785]: E0517 01:47:57.240759 3785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.150.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-a9b446c9a0&limit=500&resourceVersion=0\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError" May 17 01:47:57.241573 kubelet[3785]: W0517 01:47:57.241521 3785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://147.28.150.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.150.2:6443: connect: connection refused May 17 01:47:57.241602 kubelet[3785]: E0517 01:47:57.241584 3785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.150.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError" May 17 01:47:57.242874 kubelet[3785]: I0517 01:47:57.242856 3785 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 01:47:57.243608 kubelet[3785]: I0517 01:47:57.243592 3785 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 01:47:57.243762 kubelet[3785]: W0517 01:47:57.243753 3785 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 01:47:57.244791 kubelet[3785]: I0517 01:47:57.244658 3785 server.go:1274] "Started kubelet" May 17 01:47:57.244899 kubelet[3785]: I0517 01:47:57.244861 3785 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 01:47:57.244899 kubelet[3785]: I0517 01:47:57.244864 3785 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 01:47:57.245117 kubelet[3785]: I0517 01:47:57.245104 3785 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 01:47:57.245851 kubelet[3785]: I0517 01:47:57.245835 3785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 01:47:57.245929 kubelet[3785]: I0517 01:47:57.245908 3785 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 01:47:57.246194 kubelet[3785]: E0517 01:47:57.246105 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found" May 17 01:47:57.246474 kubelet[3785]: I0517 01:47:57.246457 3785 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 01:47:57.249662 kubelet[3785]: I0517 01:47:57.246518 3785 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 01:47:57.249743 kubelet[3785]: E0517 01:47:57.246555 3785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-a9b446c9a0?timeout=10s\": dial tcp 147.28.150.2:6443: connect: connection refused" interval="200ms" May 17 01:47:57.249848 kubelet[3785]: I0517 01:47:57.249812 3785 reconciler.go:26] "Reconciler: start to sync state" May 17 01:47:57.250025 kubelet[3785]: W0517 01:47:57.249957 3785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://147.28.150.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.150.2:6443: connect: connection refused May 17 01:47:57.250066 kubelet[3785]: E0517 01:47:57.250044 3785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.150.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError" May 17 01:47:57.250127 kubelet[3785]: I0517 01:47:57.250110 3785 factory.go:221] Registration of the systemd container factory successfully May 17 01:47:57.250248 kubelet[3785]: I0517 01:47:57.250228 3785 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 01:47:57.250989 kubelet[3785]: I0517 01:47:57.250966 3785 server.go:449] "Adding debug handlers to kubelet server" May 17 01:47:57.253371 kubelet[3785]: E0517 01:47:57.253334 3785 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 01:47:57.253460 kubelet[3785]: I0517 01:47:57.253446 3785 factory.go:221] Registration of the containerd container factory successfully May 17 01:47:57.254100 kubelet[3785]: E0517 01:47:57.253044 3785 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.150.2:6443/api/v1/namespaces/default/events\": dial tcp 147.28.150.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-a9b446c9a0.18402d500c2c9ba1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-a9b446c9a0,UID:ci-4081.3.3-n-a9b446c9a0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-a9b446c9a0,},FirstTimestamp:2025-05-17 01:47:57.244636065 +0000 UTC m=+1.197342841,LastTimestamp:2025-05-17 01:47:57.244636065 +0000 UTC m=+1.197342841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-a9b446c9a0,}" May 17 01:47:57.263352 kubelet[3785]: I0517 01:47:57.263319 3785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 01:47:57.264322 kubelet[3785]: I0517 01:47:57.264308 3785 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 01:47:57.264344 kubelet[3785]: I0517 01:47:57.264323 3785 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 01:47:57.264344 kubelet[3785]: I0517 01:47:57.264343 3785 kubelet.go:2321] "Starting kubelet main sync loop" May 17 01:47:57.264400 kubelet[3785]: E0517 01:47:57.264384 3785 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 01:47:57.265688 kubelet[3785]: W0517 01:47:57.265644 3785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.150.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.150.2:6443: connect: connection refused May 17 01:47:57.265731 kubelet[3785]: E0517 01:47:57.265702 3785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.150.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError" May 17 01:47:57.269999 kubelet[3785]: I0517 01:47:57.269984 3785 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 01:47:57.269999 kubelet[3785]: I0517 01:47:57.269997 3785 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 01:47:57.270066 kubelet[3785]: I0517 01:47:57.270015 3785 state_mem.go:36] "Initialized new in-memory state store" May 17 01:47:57.270625 kubelet[3785]: I0517 01:47:57.270609 3785 policy_none.go:49] "None policy: Start" May 17 01:47:57.271062 kubelet[3785]: I0517 01:47:57.271047 3785 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 01:47:57.271113 kubelet[3785]: I0517 01:47:57.271075 3785 state_mem.go:35] "Initializing new in-memory state store" May 17 01:47:57.275103 kubelet[3785]: I0517 01:47:57.275083 3785 
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 01:47:57.275278 kubelet[3785]: I0517 01:47:57.275267 3785 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 01:47:57.275305 kubelet[3785]: I0517 01:47:57.275279 3785 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 01:47:57.276371 kubelet[3785]: I0517 01:47:57.276328 3785 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 01:47:57.277949 kubelet[3785]: E0517 01:47:57.277910 3785 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-a9b446c9a0\" not found" May 17 01:47:57.377251 kubelet[3785]: I0517 01:47:57.377219 3785 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.377671 kubelet[3785]: E0517 01:47:57.377648 3785 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.2:6443/api/v1/nodes\": dial tcp 147.28.150.2:6443: connect: connection refused" node="ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450310 kubelet[3785]: E0517 01:47:57.450230 3785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-a9b446c9a0?timeout=10s\": dial tcp 147.28.150.2:6443: connect: connection refused" interval="400ms" May 17 01:47:57.450369 kubelet[3785]: I0517 01:47:57.450345 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c088b8866ef0a8a1df8282e6cc95aa2f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" (UID: \"c088b8866ef0a8a1df8282e6cc95aa2f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450394 
kubelet[3785]: I0517 01:47:57.450376 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450417 kubelet[3785]: I0517 01:47:57.450395 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c088b8866ef0a8a1df8282e6cc95aa2f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" (UID: \"c088b8866ef0a8a1df8282e6cc95aa2f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450417 kubelet[3785]: I0517 01:47:57.450414 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450462 kubelet[3785]: I0517 01:47:57.450445 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450514 kubelet[3785]: I0517 01:47:57.450499 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450540 kubelet[3785]: I0517 01:47:57.450524 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450561 kubelet[3785]: I0517 01:47:57.450543 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e219bc58532cb10b4c44cd353ea7f16-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6e219bc58532cb10b4c44cd353ea7f16\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.450586 kubelet[3785]: I0517 01:47:57.450559 3785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c088b8866ef0a8a1df8282e6cc95aa2f-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" (UID: \"c088b8866ef0a8a1df8282e6cc95aa2f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.579836 kubelet[3785]: I0517 01:47:57.579810 3785 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.580087 kubelet[3785]: E0517 01:47:57.580054 3785 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.2:6443/api/v1/nodes\": dial tcp 147.28.150.2:6443: connect: connection refused" node="ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.671170 containerd[2793]: time="2025-05-17T01:47:57.671060905Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-a9b446c9a0,Uid:c088b8866ef0a8a1df8282e6cc95aa2f,Namespace:kube-system,Attempt:0,}" May 17 01:47:57.672422 containerd[2793]: time="2025-05-17T01:47:57.672400345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-a9b446c9a0,Uid:6eafb587951e190247c8a962ad17925d,Namespace:kube-system,Attempt:0,}" May 17 01:47:57.673909 containerd[2793]: time="2025-05-17T01:47:57.673889465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-a9b446c9a0,Uid:6e219bc58532cb10b4c44cd353ea7f16,Namespace:kube-system,Attempt:0,}" May 17 01:47:57.789367 sshd[3654]: PAM: Permission denied for root from 218.92.0.158 May 17 01:47:57.851317 kubelet[3785]: E0517 01:47:57.851280 3785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-a9b446c9a0?timeout=10s\": dial tcp 147.28.150.2:6443: connect: connection refused" interval="800ms" May 17 01:47:57.982511 kubelet[3785]: I0517 01:47:57.982492 3785 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-a9b446c9a0" May 17 01:47:57.982782 kubelet[3785]: E0517 01:47:57.982752 3785 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.2:6443/api/v1/nodes\": dial tcp 147.28.150.2:6443: connect: connection refused" node="ci-4081.3.3-n-a9b446c9a0" May 17 01:47:58.060603 kubelet[3785]: W0517 01:47:58.060518 3785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.150.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.150.2:6443: connect: connection refused May 17 01:47:58.060603 kubelet[3785]: E0517 01:47:58.060574 3785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get \"https://147.28.150.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError"
May 17 01:47:58.249275 sshd[3838]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 01:47:58.253371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984829972.mount: Deactivated successfully.
May 17 01:47:58.254132 containerd[2793]: time="2025-05-17T01:47:58.254101305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:47:58.254645 containerd[2793]: time="2025-05-17T01:47:58.254621505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 01:47:58.254694 containerd[2793]: time="2025-05-17T01:47:58.254673145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
May 17 01:47:58.254907 containerd[2793]: time="2025-05-17T01:47:58.254889345Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:47:58.255135 containerd[2793]: time="2025-05-17T01:47:58.255116505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 01:47:58.255591 containerd[2793]: time="2025-05-17T01:47:58.255574705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:47:58.258952 containerd[2793]: time="2025-05-17T01:47:58.258923305Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:47:58.259751 containerd[2793]: time="2025-05-17T01:47:58.259715705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.77192ms"
May 17 01:47:58.261464 containerd[2793]: time="2025-05-17T01:47:58.261437705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:47:58.262182 containerd[2793]: time="2025-05-17T01:47:58.262164905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.01352ms"
May 17 01:47:58.262832 containerd[2793]: time="2025-05-17T01:47:58.262810185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 590.35244ms"
May 17 01:47:58.369695 containerd[2793]: time="2025-05-17T01:47:58.369583305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:47:58.369695 containerd[2793]: time="2025-05-17T01:47:58.369636825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:47:58.369695 containerd[2793]: time="2025-05-17T01:47:58.369649665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:47:58.369695 containerd[2793]: time="2025-05-17T01:47:58.369652025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:47:58.369815 containerd[2793]: time="2025-05-17T01:47:58.369706705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:47:58.369815 containerd[2793]: time="2025-05-17T01:47:58.369719185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:47:58.369815 containerd[2793]: time="2025-05-17T01:47:58.369765105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:47:58.369870 containerd[2793]: time="2025-05-17T01:47:58.369812385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:47:58.369870 containerd[2793]: time="2025-05-17T01:47:58.369823985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:47:58.370377 containerd[2793]: time="2025-05-17T01:47:58.370354185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:47:58.370377 containerd[2793]: time="2025-05-17T01:47:58.370352665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:47:58.370423 containerd[2793]: time="2025-05-17T01:47:58.370373025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:47:58.415318 containerd[2793]: time="2025-05-17T01:47:58.415279385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-a9b446c9a0,Uid:c088b8866ef0a8a1df8282e6cc95aa2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"302ad856537e7b627e58ad5c9ab24031f955f59c667781779a6ae8fccf35e12e\""
May 17 01:47:58.415409 containerd[2793]: time="2025-05-17T01:47:58.415326505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-a9b446c9a0,Uid:6e219bc58532cb10b4c44cd353ea7f16,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cc7f496539540c371b36fb0ea4297cb96a9722539d418f2575750625a5b1ed0\""
May 17 01:47:58.415455 containerd[2793]: time="2025-05-17T01:47:58.415398145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-a9b446c9a0,Uid:6eafb587951e190247c8a962ad17925d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a66074d4bbe8f4bfbe217c73e1df2b49bbc61f7b1fb146850fd9cc9de25aa00f\""
May 17 01:47:58.418816 containerd[2793]: time="2025-05-17T01:47:58.418789025Z" level=info msg="CreateContainer within sandbox \"a66074d4bbe8f4bfbe217c73e1df2b49bbc61f7b1fb146850fd9cc9de25aa00f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 01:47:58.418867 containerd[2793]: time="2025-05-17T01:47:58.418844905Z" level=info msg="CreateContainer within sandbox \"5cc7f496539540c371b36fb0ea4297cb96a9722539d418f2575750625a5b1ed0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 01:47:58.418888 containerd[2793]: time="2025-05-17T01:47:58.418803265Z" level=info msg="CreateContainer within sandbox \"302ad856537e7b627e58ad5c9ab24031f955f59c667781779a6ae8fccf35e12e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 01:47:58.424393 containerd[2793]: time="2025-05-17T01:47:58.424366865Z" level=info msg="CreateContainer within sandbox \"a66074d4bbe8f4bfbe217c73e1df2b49bbc61f7b1fb146850fd9cc9de25aa00f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29095d9a8577129b34d3244e9b72833c0be227a099010796b3b76b1218648624\""
May 17 01:47:58.424824 containerd[2793]: time="2025-05-17T01:47:58.424798225Z" level=info msg="StartContainer for \"29095d9a8577129b34d3244e9b72833c0be227a099010796b3b76b1218648624\""
May 17 01:47:58.424858 containerd[2793]: time="2025-05-17T01:47:58.424833345Z" level=info msg="CreateContainer within sandbox \"302ad856537e7b627e58ad5c9ab24031f955f59c667781779a6ae8fccf35e12e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e4273236d4ca8b71ec5d96ab0b38cd1a1059d05bfa242e72d9ceee337b58eb67\""
May 17 01:47:58.425020 containerd[2793]: time="2025-05-17T01:47:58.424994105Z" level=info msg="CreateContainer within sandbox \"5cc7f496539540c371b36fb0ea4297cb96a9722539d418f2575750625a5b1ed0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f242944f3996a72897c4e419e6221424d4009cc3a82647e4ca24dd1614c8f904\""
May 17 01:47:58.425222 containerd[2793]: time="2025-05-17T01:47:58.425201585Z" level=info msg="StartContainer for \"e4273236d4ca8b71ec5d96ab0b38cd1a1059d05bfa242e72d9ceee337b58eb67\""
May 17 01:47:58.425257 containerd[2793]: time="2025-05-17T01:47:58.425240905Z" level=info msg="StartContainer for \"f242944f3996a72897c4e419e6221424d4009cc3a82647e4ca24dd1614c8f904\""
May 17 01:47:58.434732 kubelet[3785]: W0517 01:47:58.434685 3785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.150.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-a9b446c9a0&limit=500&resourceVersion=0": dial tcp 147.28.150.2:6443: connect: connection refused
May 17 01:47:58.435010 kubelet[3785]: E0517 01:47:58.434749 3785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.150.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-a9b446c9a0&limit=500&resourceVersion=0\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError"
May 17 01:47:58.482848 containerd[2793]: time="2025-05-17T01:47:58.482809225Z" level=info msg="StartContainer for \"f242944f3996a72897c4e419e6221424d4009cc3a82647e4ca24dd1614c8f904\" returns successfully"
May 17 01:47:58.482917 containerd[2793]: time="2025-05-17T01:47:58.482815545Z" level=info msg="StartContainer for \"e4273236d4ca8b71ec5d96ab0b38cd1a1059d05bfa242e72d9ceee337b58eb67\" returns successfully"
May 17 01:47:58.483222 containerd[2793]: time="2025-05-17T01:47:58.483195345Z" level=info msg="StartContainer for \"29095d9a8577129b34d3244e9b72833c0be227a099010796b3b76b1218648624\" returns successfully"
May 17 01:47:58.516932 kubelet[3785]: W0517 01:47:58.516882 3785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.150.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.150.2:6443: connect: connection refused
May 17 01:47:58.517031 kubelet[3785]: E0517 01:47:58.516953 3785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.150.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.150.2:6443: connect: connection refused" logger="UnhandledError"
May 17 01:47:58.785530 kubelet[3785]: I0517 01:47:58.785508 3785 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:00.097273 kubelet[3785]: E0517 01:48:00.097238 3785 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-a9b446c9a0\" not found" node="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:00.199174 kubelet[3785]: I0517 01:48:00.199138 3785 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:00.199174 kubelet[3785]: E0517 01:48:00.199171 3785 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-n-a9b446c9a0\": node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.212083 kubelet[3785]: E0517 01:48:00.211125 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.310933 sshd[3654]: PAM: Permission denied for root from 218.92.0.158
May 17 01:48:00.311346 kubelet[3785]: E0517 01:48:00.311305 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.412112 kubelet[3785]: E0517 01:48:00.412018 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.512967 kubelet[3785]: E0517 01:48:00.512940 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.613609 kubelet[3785]: E0517 01:48:00.613595 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.714440 kubelet[3785]: E0517 01:48:00.714371 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.770711 sshd[4207]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 01:48:00.814664 kubelet[3785]: E0517 01:48:00.814645 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:00.914896 kubelet[3785]: E0517 01:48:00.914877 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:01.015567 kubelet[3785]: E0517 01:48:01.015545 3785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:01.241225 kubelet[3785]: I0517 01:48:01.241205 3785 apiserver.go:52] "Watching apiserver"
May 17 01:48:01.250292 kubelet[3785]: I0517 01:48:01.250274 3785 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 01:48:01.467561 kubelet[3785]: W0517 01:48:01.467493 3785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:48:01.814284 kubelet[3785]: W0517 01:48:01.814258 3785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:48:01.975925 systemd[1]: Reloading requested from client PID 4209 ('systemctl') (unit session-9.scope)...
May 17 01:48:01.975940 systemd[1]: Reloading...
May 17 01:48:02.034088 zram_generator::config[4253]: No configuration found.
May 17 01:48:02.130575 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 01:48:02.208425 systemd[1]: Reloading finished in 232 ms.
May 17 01:48:02.231777 kubelet[3785]: I0517 01:48:02.231733 3785 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 01:48:02.231782 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:48:02.241849 systemd[1]: kubelet.service: Deactivated successfully.
May 17 01:48:02.242154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:48:02.253376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:48:02.354646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:48:02.358639 (kubelet)[4322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 01:48:02.388384 kubelet[4322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 01:48:02.388384 kubelet[4322]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 01:48:02.388384 kubelet[4322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 01:48:02.388669 kubelet[4322]: I0517 01:48:02.388375 4322 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 01:48:02.393175 kubelet[4322]: I0517 01:48:02.393150 4322 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 01:48:02.393175 kubelet[4322]: I0517 01:48:02.393172 4322 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 01:48:02.393384 kubelet[4322]: I0517 01:48:02.393374 4322 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 01:48:02.394686 kubelet[4322]: I0517 01:48:02.394673 4322 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 17 01:48:02.396529 kubelet[4322]: I0517 01:48:02.396507 4322 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 01:48:02.400011 kubelet[4322]: E0517 01:48:02.399986 4322 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 01:48:02.400042 kubelet[4322]: I0517 01:48:02.400012 4322 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 01:48:02.418828 kubelet[4322]: I0517 01:48:02.418804 4322 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 01:48:02.419185 kubelet[4322]: I0517 01:48:02.419171 4322 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 17 01:48:02.419293 kubelet[4322]: I0517 01:48:02.419271 4322 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 01:48:02.419460 kubelet[4322]: I0517 01:48:02.419293 4322 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-a9b446c9a0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 17 01:48:02.419537 kubelet[4322]: I0517 01:48:02.419467 4322 topology_manager.go:138] "Creating topology manager with none policy"
May 17 01:48:02.419537 kubelet[4322]: I0517 01:48:02.419478 4322 container_manager_linux.go:300] "Creating device plugin manager"
May 17 01:48:02.419537 kubelet[4322]: I0517 01:48:02.419510 4322 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:48:02.419602 kubelet[4322]: I0517 01:48:02.419592 4322 kubelet.go:408] "Attempting to sync node with API server"
May 17 01:48:02.419624 kubelet[4322]: I0517 01:48:02.419605 4322 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 01:48:02.419624 kubelet[4322]: I0517 01:48:02.419622 4322 kubelet.go:314] "Adding apiserver pod source"
May 17 01:48:02.419659 kubelet[4322]: I0517 01:48:02.419634 4322 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 01:48:02.419996 kubelet[4322]: I0517 01:48:02.419982 4322 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 01:48:02.420421 kubelet[4322]: I0517 01:48:02.420408 4322 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 01:48:02.420775 kubelet[4322]: I0517 01:48:02.420763 4322 server.go:1274] "Started kubelet"
May 17 01:48:02.420861 kubelet[4322]: I0517 01:48:02.420795 4322 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 01:48:02.420882 kubelet[4322]: I0517 01:48:02.420851 4322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 01:48:02.421043 kubelet[4322]: I0517 01:48:02.421031 4322 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 01:48:02.421706 kubelet[4322]: I0517 01:48:02.421690 4322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 01:48:02.421730 kubelet[4322]: I0517 01:48:02.421701 4322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 01:48:02.421766 kubelet[4322]: I0517 01:48:02.421753 4322 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 01:48:02.421801 kubelet[4322]: I0517 01:48:02.421783 4322 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 01:48:02.421830 kubelet[4322]: E0517 01:48:02.421803 4322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-a9b446c9a0\" not found"
May 17 01:48:02.421920 kubelet[4322]: I0517 01:48:02.421908 4322 reconciler.go:26] "Reconciler: start to sync state"
May 17 01:48:02.422237 kubelet[4322]: E0517 01:48:02.422212 4322 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 01:48:02.422366 kubelet[4322]: I0517 01:48:02.422352 4322 factory.go:221] Registration of the systemd container factory successfully
May 17 01:48:02.422467 kubelet[4322]: I0517 01:48:02.422451 4322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 01:48:02.423439 kubelet[4322]: I0517 01:48:02.423421 4322 server.go:449] "Adding debug handlers to kubelet server"
May 17 01:48:02.423772 kubelet[4322]: I0517 01:48:02.423762 4322 factory.go:221] Registration of the containerd container factory successfully
May 17 01:48:02.428850 kubelet[4322]: I0517 01:48:02.428823 4322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 01:48:02.429819 kubelet[4322]: I0517 01:48:02.429796 4322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 01:48:02.429839 kubelet[4322]: I0517 01:48:02.429824 4322 status_manager.go:217] "Starting to sync pod status with apiserver"
May 17 01:48:02.429858 kubelet[4322]: I0517 01:48:02.429843 4322 kubelet.go:2321] "Starting kubelet main sync loop"
May 17 01:48:02.429913 kubelet[4322]: E0517 01:48:02.429890 4322 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 01:48:02.464432 kubelet[4322]: I0517 01:48:02.464405 4322 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 17 01:48:02.464432 kubelet[4322]: I0517 01:48:02.464429 4322 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 17 01:48:02.464526 kubelet[4322]: I0517 01:48:02.464449 4322 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:48:02.464617 kubelet[4322]: I0517 01:48:02.464606 4322 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 01:48:02.464638 kubelet[4322]: I0517 01:48:02.464618 4322 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 01:48:02.464638 kubelet[4322]: I0517 01:48:02.464636 4322 policy_none.go:49] "None policy: Start"
May 17 01:48:02.465225 kubelet[4322]: I0517 01:48:02.465209 4322 memory_manager.go:170] "Starting memorymanager" policy="None"
May 17 01:48:02.465271 kubelet[4322]: I0517 01:48:02.465237 4322 state_mem.go:35] "Initializing new in-memory state store"
May 17 01:48:02.465387 kubelet[4322]: I0517 01:48:02.465378 4322 state_mem.go:75] "Updated machine memory state"
May 17 01:48:02.466446 kubelet[4322]: I0517 01:48:02.466428 4322 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 01:48:02.466598 kubelet[4322]: I0517 01:48:02.466585 4322 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 01:48:02.466623 kubelet[4322]: I0517 01:48:02.466598 4322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 01:48:02.466945 kubelet[4322]: I0517 01:48:02.466808 4322 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 01:48:02.536771 kubelet[4322]: W0517 01:48:02.536747 4322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:48:02.536956 kubelet[4322]: W0517 01:48:02.536936 4322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:48:02.537004 kubelet[4322]: E0517 01:48:02.536987 4322 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.537058 kubelet[4322]: W0517 01:48:02.537042 4322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:48:02.537110 kubelet[4322]: E0517 01:48:02.537096 4322 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.569873 kubelet[4322]: I0517 01:48:02.569853 4322 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.583162 kubelet[4322]: I0517 01:48:02.583141 4322 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.583219 kubelet[4322]: I0517 01:48:02.583208 4322 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622687 kubelet[4322]: I0517 01:48:02.622662 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c088b8866ef0a8a1df8282e6cc95aa2f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" (UID: \"c088b8866ef0a8a1df8282e6cc95aa2f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622740 kubelet[4322]: I0517 01:48:02.622690 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622740 kubelet[4322]: I0517 01:48:02.622710 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622740 kubelet[4322]: I0517 01:48:02.622725 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622815 kubelet[4322]: I0517 01:48:02.622743 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c088b8866ef0a8a1df8282e6cc95aa2f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" (UID: \"c088b8866ef0a8a1df8282e6cc95aa2f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622815 kubelet[4322]: I0517 01:48:02.622761 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622815 kubelet[4322]: I0517 01:48:02.622777 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eafb587951e190247c8a962ad17925d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6eafb587951e190247c8a962ad17925d\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622815 kubelet[4322]: I0517 01:48:02.622792 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e219bc58532cb10b4c44cd353ea7f16-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-a9b446c9a0\" (UID: \"6e219bc58532cb10b4c44cd353ea7f16\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:02.622815 kubelet[4322]: I0517 01:48:02.622806 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c088b8866ef0a8a1df8282e6cc95aa2f-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" (UID: \"c088b8866ef0a8a1df8282e6cc95aa2f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:03.108154 sshd[3654]: PAM: Permission denied for root from 218.92.0.158
May 17 01:48:03.329156 sshd[3654]: Received disconnect from 218.92.0.158 port 35794:11: [preauth]
May 17 01:48:03.329156 sshd[3654]: Disconnected from authenticating user root 218.92.0.158 port 35794 [preauth]
May 17 01:48:03.331169 systemd[1]: sshd@7-147.28.150.2:22-218.92.0.158:35794.service: Deactivated successfully.
May 17 01:48:03.420464 kubelet[4322]: I0517 01:48:03.420372 4322 apiserver.go:52] "Watching apiserver"
May 17 01:48:03.446923 kubelet[4322]: W0517 01:48:03.446893 4322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:48:03.447007 kubelet[4322]: E0517 01:48:03.446955 4322 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-a9b446c9a0\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:03.458552 kubelet[4322]: I0517 01:48:03.458498 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-a9b446c9a0" podStartSLOduration=2.458466825 podStartE2EDuration="2.458466825s" podCreationTimestamp="2025-05-17 01:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:48:03.458444265 +0000 UTC m=+1.096886321" watchObservedRunningTime="2025-05-17 01:48:03.458466825 +0000 UTC m=+1.096908881"
May 17 01:48:03.469528 kubelet[4322]: I0517 01:48:03.469488 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-a9b446c9a0" podStartSLOduration=2.469475145 podStartE2EDuration="2.469475145s" podCreationTimestamp="2025-05-17 01:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:48:03.465117225 +0000 UTC m=+1.103559281" watchObservedRunningTime="2025-05-17 01:48:03.469475145 +0000 UTC m=+1.107917201"
May 17 01:48:03.469586 kubelet[4322]: I0517 01:48:03.469555 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-a9b446c9a0" podStartSLOduration=1.469550905 podStartE2EDuration="1.469550905s" podCreationTimestamp="2025-05-17 01:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:48:03.469548825 +0000 UTC m=+1.107990881" watchObservedRunningTime="2025-05-17 01:48:03.469550905 +0000 UTC m=+1.107992961"
May 17 01:48:03.522731 kubelet[4322]: I0517 01:48:03.522698 4322 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 01:48:05.078327 systemd[1]: Started sshd@8-147.28.150.2:22-36.110.172.218:35910.service - OpenSSH per-connection server daemon (36.110.172.218:35910).
May 17 01:48:06.243752 sshd[4469]: Invalid user user1 from 36.110.172.218 port 35910
May 17 01:48:06.463488 sshd[4469]: Received disconnect from 36.110.172.218 port 35910:11: Bye Bye [preauth]
May 17 01:48:06.463488 sshd[4469]: Disconnected from invalid user user1 36.110.172.218 port 35910 [preauth]
May 17 01:48:06.465303 systemd[1]: sshd@8-147.28.150.2:22-36.110.172.218:35910.service: Deactivated successfully.
May 17 01:48:09.239563 kubelet[4322]: I0517 01:48:09.239533 4322 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 01:48:09.239850 containerd[2793]: time="2025-05-17T01:48:09.239818346Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 01:48:09.240021 kubelet[4322]: I0517 01:48:09.239974 4322 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 01:48:10.366543 kubelet[4322]: I0517 01:48:10.366504 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb526220-de03-4a09-a703-7dbf8d5f0a7f-kube-proxy\") pod \"kube-proxy-wnn6j\" (UID: \"fb526220-de03-4a09-a703-7dbf8d5f0a7f\") " pod="kube-system/kube-proxy-wnn6j"
May 17 01:48:10.366932 kubelet[4322]: I0517 01:48:10.366564 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb526220-de03-4a09-a703-7dbf8d5f0a7f-xtables-lock\") pod \"kube-proxy-wnn6j\" (UID: \"fb526220-de03-4a09-a703-7dbf8d5f0a7f\") " pod="kube-system/kube-proxy-wnn6j"
May 17 01:48:10.366932 kubelet[4322]: I0517 01:48:10.366600 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb526220-de03-4a09-a703-7dbf8d5f0a7f-lib-modules\") pod \"kube-proxy-wnn6j\" (UID: \"fb526220-de03-4a09-a703-7dbf8d5f0a7f\") " pod="kube-system/kube-proxy-wnn6j"
May 17 01:48:10.366932 kubelet[4322]: I0517 01:48:10.366618 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nqr\" (UniqueName: \"kubernetes.io/projected/fb526220-de03-4a09-a703-7dbf8d5f0a7f-kube-api-access-k4nqr\") pod \"kube-proxy-wnn6j\" (UID: \"fb526220-de03-4a09-a703-7dbf8d5f0a7f\") " pod="kube-system/kube-proxy-wnn6j"
May 17 01:48:10.467693 kubelet[4322]: I0517 01:48:10.467658 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrq6\" (UniqueName: \"kubernetes.io/projected/542eb2cd-1ee3-4c02-9829-cd3a1761387b-kube-api-access-hcrq6\") pod \"tigera-operator-7c5755cdcb-lptkt\" (UID: \"542eb2cd-1ee3-4c02-9829-cd3a1761387b\") " pod="tigera-operator/tigera-operator-7c5755cdcb-lptkt"
May 17 01:48:10.467771 kubelet[4322]: I0517 01:48:10.467695 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/542eb2cd-1ee3-4c02-9829-cd3a1761387b-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-lptkt\" (UID: \"542eb2cd-1ee3-4c02-9829-cd3a1761387b\") " pod="tigera-operator/tigera-operator-7c5755cdcb-lptkt"
May 17 01:48:10.488672 containerd[2793]: time="2025-05-17T01:48:10.488631246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnn6j,Uid:fb526220-de03-4a09-a703-7dbf8d5f0a7f,Namespace:kube-system,Attempt:0,}"
May 17 01:48:10.500782 containerd[2793]: time="2025-05-17T01:48:10.500430568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:48:10.500825 containerd[2793]: time="2025-05-17T01:48:10.500808090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:48:10.500860 containerd[2793]: time="2025-05-17T01:48:10.500822170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:48:10.500943 containerd[2793]: time="2025-05-17T01:48:10.500928851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:10.532857 containerd[2793]: time="2025-05-17T01:48:10.532820952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnn6j,Uid:fb526220-de03-4a09-a703-7dbf8d5f0a7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"327ff67ebacdba7aca9bebce4aaa5f59c608908023db9e9b6c2873d6354d4164\"" May 17 01:48:10.534712 containerd[2793]: time="2025-05-17T01:48:10.534689245Z" level=info msg="CreateContainer within sandbox \"327ff67ebacdba7aca9bebce4aaa5f59c608908023db9e9b6c2873d6354d4164\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 01:48:10.539888 containerd[2793]: time="2025-05-17T01:48:10.539858801Z" level=info msg="CreateContainer within sandbox \"327ff67ebacdba7aca9bebce4aaa5f59c608908023db9e9b6c2873d6354d4164\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d40a058ad0718e6936cdaa41dbbf3b6f51a962aa68ef2982bdd7900e57e78bcd\"" May 17 01:48:10.540277 containerd[2793]: time="2025-05-17T01:48:10.540258004Z" level=info msg="StartContainer for \"d40a058ad0718e6936cdaa41dbbf3b6f51a962aa68ef2982bdd7900e57e78bcd\"" May 17 01:48:10.582246 containerd[2793]: time="2025-05-17T01:48:10.582220455Z" level=info msg="StartContainer for \"d40a058ad0718e6936cdaa41dbbf3b6f51a962aa68ef2982bdd7900e57e78bcd\" returns successfully" May 17 01:48:10.666139 containerd[2793]: time="2025-05-17T01:48:10.666064437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-lptkt,Uid:542eb2cd-1ee3-4c02-9829-cd3a1761387b,Namespace:tigera-operator,Attempt:0,}" May 17 01:48:10.678709 containerd[2793]: time="2025-05-17T01:48:10.678638725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:10.678709 containerd[2793]: time="2025-05-17T01:48:10.678689085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:10.678808 containerd[2793]: time="2025-05-17T01:48:10.678700365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:10.678808 containerd[2793]: time="2025-05-17T01:48:10.678785606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:10.725796 containerd[2793]: time="2025-05-17T01:48:10.725733131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-lptkt,Uid:542eb2cd-1ee3-4c02-9829-cd3a1761387b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e009795f38e770e2c0d4ab0054028bb85f626212ebf4ee9ffebd58c0ce075543\"" May 17 01:48:10.727039 containerd[2793]: time="2025-05-17T01:48:10.727020060Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 01:48:11.454064 kubelet[4322]: I0517 01:48:11.453976 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wnn6j" podStartSLOduration=1.453959069 podStartE2EDuration="1.453959069s" podCreationTimestamp="2025-05-17 01:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:48:11.453727348 +0000 UTC m=+9.092169404" watchObservedRunningTime="2025-05-17 01:48:11.453959069 +0000 UTC m=+9.092401125" May 17 01:48:11.672959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204335008.mount: Deactivated successfully. 
May 17 01:48:13.447547 containerd[2793]: time="2025-05-17T01:48:13.447507803Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:13.447882 containerd[2793]: time="2025-05-17T01:48:13.447571123Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 17 01:48:13.448269 containerd[2793]: time="2025-05-17T01:48:13.448239447Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:13.450325 containerd[2793]: time="2025-05-17T01:48:13.450277579Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:13.451087 containerd[2793]: time="2025-05-17T01:48:13.451056863Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.724008922s" May 17 01:48:13.451147 containerd[2793]: time="2025-05-17T01:48:13.451089663Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 17 01:48:13.452740 containerd[2793]: time="2025-05-17T01:48:13.452721313Z" level=info msg="CreateContainer within sandbox \"e009795f38e770e2c0d4ab0054028bb85f626212ebf4ee9ffebd58c0ce075543\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 01:48:13.456191 containerd[2793]: time="2025-05-17T01:48:13.456168693Z" level=info msg="CreateContainer within sandbox 
\"e009795f38e770e2c0d4ab0054028bb85f626212ebf4ee9ffebd58c0ce075543\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a309bf7a9fa8006307db4f13f089579e2a2d4567582d08f4406ca7e42b7c1672\"" May 17 01:48:13.456535 containerd[2793]: time="2025-05-17T01:48:13.456518895Z" level=info msg="StartContainer for \"a309bf7a9fa8006307db4f13f089579e2a2d4567582d08f4406ca7e42b7c1672\"" May 17 01:48:13.458207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572642677.mount: Deactivated successfully. May 17 01:48:13.496499 containerd[2793]: time="2025-05-17T01:48:13.496461123Z" level=info msg="StartContainer for \"a309bf7a9fa8006307db4f13f089579e2a2d4567582d08f4406ca7e42b7c1672\" returns successfully" May 17 01:48:14.458558 kubelet[4322]: I0517 01:48:14.458500 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-lptkt" podStartSLOduration=1.733382094 podStartE2EDuration="4.458486303s" podCreationTimestamp="2025-05-17 01:48:10 +0000 UTC" firstStartedPulling="2025-05-17 01:48:10.726615378 +0000 UTC m=+8.365057394" lastFinishedPulling="2025-05-17 01:48:13.451719547 +0000 UTC m=+11.090161603" observedRunningTime="2025-05-17 01:48:14.458304182 +0000 UTC m=+12.096746238" watchObservedRunningTime="2025-05-17 01:48:14.458486303 +0000 UTC m=+12.096928359" May 17 01:48:15.666057 update_engine[2783]: I20250517 01:48:15.665993 2783 update_attempter.cc:509] Updating boot flags... 
May 17 01:48:15.697085 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4948) May 17 01:48:15.727092 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4951) May 17 01:48:18.241829 sudo[3080]: pam_unix(sudo:session): session closed for user root May 17 01:48:18.304303 sshd[3076]: pam_unix(sshd:session): session closed for user core May 17 01:48:18.307512 systemd[1]: sshd@6-147.28.150.2:22-147.75.109.163:42586.service: Deactivated successfully. May 17 01:48:18.309427 systemd-logind[2777]: Session 9 logged out. Waiting for processes to exit. May 17 01:48:18.309515 systemd[1]: session-9.scope: Deactivated successfully. May 17 01:48:18.310356 systemd-logind[2777]: Removed session 9. May 17 01:48:23.440027 kubelet[4322]: I0517 01:48:23.439985 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d61159e-1b00-4142-988e-a81709890457-tigera-ca-bundle\") pod \"calico-typha-7479d9dc6-6xnqs\" (UID: \"3d61159e-1b00-4142-988e-a81709890457\") " pod="calico-system/calico-typha-7479d9dc6-6xnqs" May 17 01:48:23.440027 kubelet[4322]: I0517 01:48:23.440029 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbr76\" (UniqueName: \"kubernetes.io/projected/3d61159e-1b00-4142-988e-a81709890457-kube-api-access-jbr76\") pod \"calico-typha-7479d9dc6-6xnqs\" (UID: \"3d61159e-1b00-4142-988e-a81709890457\") " pod="calico-system/calico-typha-7479d9dc6-6xnqs" May 17 01:48:23.440469 kubelet[4322]: I0517 01:48:23.440062 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d61159e-1b00-4142-988e-a81709890457-typha-certs\") pod \"calico-typha-7479d9dc6-6xnqs\" (UID: \"3d61159e-1b00-4142-988e-a81709890457\") " 
pod="calico-system/calico-typha-7479d9dc6-6xnqs" May 17 01:48:23.580550 containerd[2793]: time="2025-05-17T01:48:23.580507064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7479d9dc6-6xnqs,Uid:3d61159e-1b00-4142-988e-a81709890457,Namespace:calico-system,Attempt:0,}" May 17 01:48:23.592842 containerd[2793]: time="2025-05-17T01:48:23.592726100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:23.592842 containerd[2793]: time="2025-05-17T01:48:23.592832941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:23.592923 containerd[2793]: time="2025-05-17T01:48:23.592844701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:23.592959 containerd[2793]: time="2025-05-17T01:48:23.592938741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:23.639217 containerd[2793]: time="2025-05-17T01:48:23.639180920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7479d9dc6-6xnqs,Uid:3d61159e-1b00-4142-988e-a81709890457,Namespace:calico-system,Attempt:0,} returns sandbox id \"95ea527890bec20f68f66b2f1dd8740ca9135c0dbfefe37167769fa6136dbee0\"" May 17 01:48:23.640188 containerd[2793]: time="2025-05-17T01:48:23.640168842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 01:48:23.742167 kubelet[4322]: I0517 01:48:23.742079 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d39ef1f-ad60-41f4-9508-db37c1d1098a-tigera-ca-bundle\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742167 kubelet[4322]: I0517 01:48:23.742133 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shzbf\" (UniqueName: \"kubernetes.io/projected/7d39ef1f-ad60-41f4-9508-db37c1d1098a-kube-api-access-shzbf\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742337 kubelet[4322]: I0517 01:48:23.742169 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-cni-net-dir\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742337 kubelet[4322]: I0517 01:48:23.742196 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-var-lib-calico\") pod 
\"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742337 kubelet[4322]: I0517 01:48:23.742225 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-xtables-lock\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742337 kubelet[4322]: I0517 01:48:23.742255 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-policysync\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742337 kubelet[4322]: I0517 01:48:23.742281 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-flexvol-driver-host\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742442 kubelet[4322]: I0517 01:48:23.742305 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-lib-modules\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742442 kubelet[4322]: I0517 01:48:23.742319 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7d39ef1f-ad60-41f4-9508-db37c1d1098a-node-certs\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " 
pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742442 kubelet[4322]: I0517 01:48:23.742336 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-cni-log-dir\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742442 kubelet[4322]: I0517 01:48:23.742389 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-cni-bin-dir\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.742442 kubelet[4322]: I0517 01:48:23.742426 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7d39ef1f-ad60-41f4-9508-db37c1d1098a-var-run-calico\") pod \"calico-node-p5bj2\" (UID: \"7d39ef1f-ad60-41f4-9508-db37c1d1098a\") " pod="calico-system/calico-node-p5bj2" May 17 01:48:23.822082 kubelet[4322]: E0517 01:48:23.822036 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcb6n" podUID="d016dc5a-5728-4bc3-95ba-213735e255c5" May 17 01:48:23.843605 kubelet[4322]: E0517 01:48:23.843585 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.843637 kubelet[4322]: W0517 01:48:23.843603 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" May 17 01:48:23.843637 kubelet[4322]: E0517 01:48:23.843626 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:23.845379 kubelet[4322]: E0517 01:48:23.845362 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.845409 kubelet[4322]: W0517 01:48:23.845377 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.845409 kubelet[4322]: E0517 01:48:23.845391 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:23.851348 kubelet[4322]: E0517 01:48:23.851333 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.851375 kubelet[4322]: W0517 01:48:23.851346 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.851375 kubelet[4322]: E0517 01:48:23.851359 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:23.878819 containerd[2793]: time="2025-05-17T01:48:23.878779678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p5bj2,Uid:7d39ef1f-ad60-41f4-9508-db37c1d1098a,Namespace:calico-system,Attempt:0,}" May 17 01:48:23.890655 containerd[2793]: time="2025-05-17T01:48:23.890290273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:23.890683 containerd[2793]: time="2025-05-17T01:48:23.890651154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:23.890683 containerd[2793]: time="2025-05-17T01:48:23.890665474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:23.890779 containerd[2793]: time="2025-05-17T01:48:23.890760194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:23.921027 containerd[2793]: time="2025-05-17T01:48:23.920993325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p5bj2,Uid:7d39ef1f-ad60-41f4-9508-db37c1d1098a,Namespace:calico-system,Attempt:0,} returns sandbox id \"07513021ecf30163835deba4d1c232d1fce0a567e72bc8832d2ed57d200578db\"" May 17 01:48:23.943206 kubelet[4322]: E0517 01:48:23.943181 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.943206 kubelet[4322]: W0517 01:48:23.943202 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.943310 kubelet[4322]: E0517 01:48:23.943221 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:23.943310 kubelet[4322]: I0517 01:48:23.943245 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpms4\" (UniqueName: \"kubernetes.io/projected/d016dc5a-5728-4bc3-95ba-213735e255c5-kube-api-access-xpms4\") pod \"csi-node-driver-hcb6n\" (UID: \"d016dc5a-5728-4bc3-95ba-213735e255c5\") " pod="calico-system/csi-node-driver-hcb6n" May 17 01:48:23.943528 kubelet[4322]: E0517 01:48:23.943510 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.943528 kubelet[4322]: W0517 01:48:23.943522 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.943571 kubelet[4322]: E0517 01:48:23.943534 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:23.943571 kubelet[4322]: I0517 01:48:23.943548 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d016dc5a-5728-4bc3-95ba-213735e255c5-socket-dir\") pod \"csi-node-driver-hcb6n\" (UID: \"d016dc5a-5728-4bc3-95ba-213735e255c5\") " pod="calico-system/csi-node-driver-hcb6n" May 17 01:48:23.943845 kubelet[4322]: E0517 01:48:23.943824 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.943870 kubelet[4322]: W0517 01:48:23.943844 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.943870 kubelet[4322]: E0517 01:48:23.943864 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:23.944134 kubelet[4322]: E0517 01:48:23.944122 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.944134 kubelet[4322]: W0517 01:48:23.944131 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.944178 kubelet[4322]: E0517 01:48:23.944142 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:23.944363 kubelet[4322]: E0517 01:48:23.944351 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.944363 kubelet[4322]: W0517 01:48:23.944360 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.944404 kubelet[4322]: E0517 01:48:23.944370 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:23.944404 kubelet[4322]: I0517 01:48:23.944392 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d016dc5a-5728-4bc3-95ba-213735e255c5-varrun\") pod \"csi-node-driver-hcb6n\" (UID: \"d016dc5a-5728-4bc3-95ba-213735e255c5\") " pod="calico-system/csi-node-driver-hcb6n" May 17 01:48:23.944628 kubelet[4322]: E0517 01:48:23.944612 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.944653 kubelet[4322]: W0517 01:48:23.944627 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.944653 kubelet[4322]: E0517 01:48:23.944644 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:23.944866 kubelet[4322]: E0517 01:48:23.944855 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.944866 kubelet[4322]: W0517 01:48:23.944863 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.944907 kubelet[4322]: E0517 01:48:23.944876 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:23.945090 kubelet[4322]: E0517 01:48:23.945083 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.945117 kubelet[4322]: W0517 01:48:23.945090 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.945117 kubelet[4322]: E0517 01:48:23.945100 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:23.945156 kubelet[4322]: I0517 01:48:23.945118 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d016dc5a-5728-4bc3-95ba-213735e255c5-registration-dir\") pod \"csi-node-driver-hcb6n\" (UID: \"d016dc5a-5728-4bc3-95ba-213735e255c5\") " pod="calico-system/csi-node-driver-hcb6n" May 17 01:48:23.945339 kubelet[4322]: E0517 01:48:23.945326 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.945339 kubelet[4322]: W0517 01:48:23.945336 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.945383 kubelet[4322]: E0517 01:48:23.945347 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:23.945383 kubelet[4322]: I0517 01:48:23.945360 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d016dc5a-5728-4bc3-95ba-213735e255c5-kubelet-dir\") pod \"csi-node-driver-hcb6n\" (UID: \"d016dc5a-5728-4bc3-95ba-213735e255c5\") " pod="calico-system/csi-node-driver-hcb6n" May 17 01:48:23.945601 kubelet[4322]: E0517 01:48:23.945589 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.945601 kubelet[4322]: W0517 01:48:23.945599 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.945640 kubelet[4322]: E0517 01:48:23.945611 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:23.945826 kubelet[4322]: E0517 01:48:23.945816 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:23.945826 kubelet[4322]: W0517 01:48:23.945823 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:23.945869 kubelet[4322]: E0517 01:48:23.945841 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.277843 containerd[2793]: time="2025-05-17T01:48:24.277809583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:24.277945 containerd[2793]: time="2025-05-17T01:48:24.277837503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 17 01:48:24.278543 containerd[2793]: time="2025-05-17T01:48:24.278521185Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:24.280247 containerd[2793]: time="2025-05-17T01:48:24.280228230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:24.280911 containerd[2793]: time="2025-05-17T01:48:24.280889192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 640.693749ms" May 17 01:48:24.280937 containerd[2793]: time="2025-05-17T01:48:24.280919152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 17 01:48:24.281712 containerd[2793]: time="2025-05-17T01:48:24.281692554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 01:48:24.286168 containerd[2793]: time="2025-05-17T01:48:24.286143527Z" level=info msg="CreateContainer within sandbox 
\"95ea527890bec20f68f66b2f1dd8740ca9135c0dbfefe37167769fa6136dbee0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 01:48:24.290665 containerd[2793]: time="2025-05-17T01:48:24.290635579Z" level=info msg="CreateContainer within sandbox \"95ea527890bec20f68f66b2f1dd8740ca9135c0dbfefe37167769fa6136dbee0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"189ea2475af4716742b64746f06efdbeee1bae41e0e14e182a318cb9be5ffd13\"" May 17 01:48:24.290994 containerd[2793]: time="2025-05-17T01:48:24.290968940Z" level=info msg="StartContainer for \"189ea2475af4716742b64746f06efdbeee1bae41e0e14e182a318cb9be5ffd13\"" May 17 01:48:24.346815 containerd[2793]: time="2025-05-17T01:48:24.346730657Z" level=info msg="StartContainer for \"189ea2475af4716742b64746f06efdbeee1bae41e0e14e182a318cb9be5ffd13\" returns successfully" May 17 01:48:24.475777 kubelet[4322]: I0517 01:48:24.475732 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7479d9dc6-6xnqs" podStartSLOduration=0.834110548 podStartE2EDuration="1.4757149s" podCreationTimestamp="2025-05-17 01:48:23 +0000 UTC" firstStartedPulling="2025-05-17 01:48:23.639952722 +0000 UTC m=+21.278394778" lastFinishedPulling="2025-05-17 01:48:24.281557074 +0000 UTC m=+21.919999130" observedRunningTime="2025-05-17 01:48:24.475399459 +0000 UTC m=+22.113841515" watchObservedRunningTime="2025-05-17 01:48:24.4757149 +0000 UTC m=+22.114156956" May 17 01:48:24.547718 kubelet[4322]: E0517 01:48:24.547694 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.547718 kubelet[4322]: W0517 01:48:24.547713 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.547808 kubelet[4322]: E0517 01:48:24.547731 4322 
plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 17 01:48:24.551348 kubelet[4322]: E0517 01:48:24.551339 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.551377 kubelet[4322]: W0517 01:48:24.551348 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.551377 kubelet[4322]: E0517 01:48:24.551358 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.551603 kubelet[4322]: E0517 01:48:24.551594 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.551632 kubelet[4322]: W0517 01:48:24.551602 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.551632 kubelet[4322]: E0517 01:48:24.551613 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.551824 kubelet[4322]: E0517 01:48:24.551816 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.551824 kubelet[4322]: W0517 01:48:24.551824 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.551873 kubelet[4322]: E0517 01:48:24.551834 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.552042 kubelet[4322]: E0517 01:48:24.552034 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.552066 kubelet[4322]: W0517 01:48:24.552041 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.552066 kubelet[4322]: E0517 01:48:24.552052 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.552226 kubelet[4322]: E0517 01:48:24.552218 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.552249 kubelet[4322]: W0517 01:48:24.552226 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.552270 kubelet[4322]: E0517 01:48:24.552253 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.552438 kubelet[4322]: E0517 01:48:24.552430 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.552459 kubelet[4322]: W0517 01:48:24.552437 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.552484 kubelet[4322]: E0517 01:48:24.552460 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.552612 kubelet[4322]: E0517 01:48:24.552604 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.552633 kubelet[4322]: W0517 01:48:24.552612 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.552633 kubelet[4322]: E0517 01:48:24.552622 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.552775 kubelet[4322]: E0517 01:48:24.552768 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.552796 kubelet[4322]: W0517 01:48:24.552775 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.552796 kubelet[4322]: E0517 01:48:24.552785 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.552920 kubelet[4322]: E0517 01:48:24.552913 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.552942 kubelet[4322]: W0517 01:48:24.552921 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.552942 kubelet[4322]: E0517 01:48:24.552931 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.553145 kubelet[4322]: E0517 01:48:24.553128 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.553171 kubelet[4322]: W0517 01:48:24.553143 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.553171 kubelet[4322]: E0517 01:48:24.553160 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.553375 kubelet[4322]: E0517 01:48:24.553364 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.553375 kubelet[4322]: W0517 01:48:24.553372 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.553417 kubelet[4322]: E0517 01:48:24.553386 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.553539 kubelet[4322]: E0517 01:48:24.553529 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.553539 kubelet[4322]: W0517 01:48:24.553537 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.553579 kubelet[4322]: E0517 01:48:24.553547 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.553777 kubelet[4322]: E0517 01:48:24.553766 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.553777 kubelet[4322]: W0517 01:48:24.553774 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.553819 kubelet[4322]: E0517 01:48:24.553785 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.553998 kubelet[4322]: E0517 01:48:24.553989 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.554019 kubelet[4322]: W0517 01:48:24.553998 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.554019 kubelet[4322]: E0517 01:48:24.554009 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.554227 kubelet[4322]: E0517 01:48:24.554215 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.554227 kubelet[4322]: W0517 01:48:24.554225 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.554266 kubelet[4322]: E0517 01:48:24.554236 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:48:24.554486 kubelet[4322]: E0517 01:48:24.554476 4322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:48:24.554507 kubelet[4322]: W0517 01:48:24.554487 4322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:48:24.554507 kubelet[4322]: E0517 01:48:24.554497 4322 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:48:24.567368 containerd[2793]: time="2025-05-17T01:48:24.567339997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:24.567425 containerd[2793]: time="2025-05-17T01:48:24.567398878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 17 01:48:24.568471 containerd[2793]: time="2025-05-17T01:48:24.568440040Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:24.571183 containerd[2793]: time="2025-05-17T01:48:24.571152888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:24.571833 containerd[2793]: time="2025-05-17T01:48:24.571800530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 290.076856ms" May 17 01:48:24.571873 containerd[2793]: time="2025-05-17T01:48:24.571839370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 01:48:24.574751 containerd[2793]: time="2025-05-17T01:48:24.574724538Z" level=info msg="CreateContainer within sandbox \"07513021ecf30163835deba4d1c232d1fce0a567e72bc8832d2ed57d200578db\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 01:48:24.579262 containerd[2793]: time="2025-05-17T01:48:24.579237151Z" level=info msg="CreateContainer within sandbox \"07513021ecf30163835deba4d1c232d1fce0a567e72bc8832d2ed57d200578db\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aae5dd59440e860b050a84a5962f40d186393a532312aa6ee34439be2a279e6c\"" May 17 01:48:24.579585 containerd[2793]: time="2025-05-17T01:48:24.579563312Z" level=info msg="StartContainer for \"aae5dd59440e860b050a84a5962f40d186393a532312aa6ee34439be2a279e6c\"" May 17 01:48:24.626101 containerd[2793]: time="2025-05-17T01:48:24.626012002Z" level=info msg="StartContainer for \"aae5dd59440e860b050a84a5962f40d186393a532312aa6ee34439be2a279e6c\" returns successfully" May 17 01:48:24.815365 containerd[2793]: time="2025-05-17T01:48:24.815317975Z" level=info msg="shim disconnected" id=aae5dd59440e860b050a84a5962f40d186393a532312aa6ee34439be2a279e6c namespace=k8s.io May 17 01:48:24.815365 containerd[2793]: time="2025-05-17T01:48:24.815359095Z" level=warning msg="cleaning up after shim disconnected" id=aae5dd59440e860b050a84a5962f40d186393a532312aa6ee34439be2a279e6c namespace=k8s.io May 17 01:48:24.815365 containerd[2793]: time="2025-05-17T01:48:24.815367455Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 01:48:25.431104 kubelet[4322]: E0517 01:48:25.431048 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcb6n" podUID="d016dc5a-5728-4bc3-95ba-213735e255c5" May 17 01:48:25.471490 containerd[2793]: time="2025-05-17T01:48:25.471439297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 01:48:25.544816 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-aae5dd59440e860b050a84a5962f40d186393a532312aa6ee34439be2a279e6c-rootfs.mount: Deactivated successfully. May 17 01:48:26.335939 containerd[2793]: time="2025-05-17T01:48:26.335865121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:26.336305 containerd[2793]: time="2025-05-17T01:48:26.335939681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 17 01:48:26.336625 containerd[2793]: time="2025-05-17T01:48:26.336596403Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:26.338583 containerd[2793]: time="2025-05-17T01:48:26.338558127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:26.339351 containerd[2793]: time="2025-05-17T01:48:26.339325009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 867.850992ms" May 17 01:48:26.339519 containerd[2793]: time="2025-05-17T01:48:26.339437970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 01:48:26.341094 containerd[2793]: time="2025-05-17T01:48:26.341048774Z" level=info msg="CreateContainer within sandbox \"07513021ecf30163835deba4d1c232d1fce0a567e72bc8832d2ed57d200578db\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 01:48:26.348617 containerd[2793]: time="2025-05-17T01:48:26.348583592Z" level=info msg="CreateContainer within sandbox \"07513021ecf30163835deba4d1c232d1fce0a567e72bc8832d2ed57d200578db\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8f97e6e824271d5d28829a97c4fa4ef72a061d59d8f926c06110205dce1b25c2\"" May 17 01:48:26.349075 containerd[2793]: time="2025-05-17T01:48:26.349047113Z" level=info msg="StartContainer for \"8f97e6e824271d5d28829a97c4fa4ef72a061d59d8f926c06110205dce1b25c2\"" May 17 01:48:26.396552 containerd[2793]: time="2025-05-17T01:48:26.396485871Z" level=info msg="StartContainer for \"8f97e6e824271d5d28829a97c4fa4ef72a061d59d8f926c06110205dce1b25c2\" returns successfully" May 17 01:48:26.769357 containerd[2793]: time="2025-05-17T01:48:26.769322592Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:48:26.784913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f97e6e824271d5d28829a97c4fa4ef72a061d59d8f926c06110205dce1b25c2-rootfs.mount: Deactivated successfully. 
May 17 01:48:26.792635 kubelet[4322]: I0517 01:48:26.792520 4322 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 01:48:26.899856 containerd[2793]: time="2025-05-17T01:48:26.899797395Z" level=info msg="shim disconnected" id=8f97e6e824271d5d28829a97c4fa4ef72a061d59d8f926c06110205dce1b25c2 namespace=k8s.io May 17 01:48:26.899856 containerd[2793]: time="2025-05-17T01:48:26.899849675Z" level=warning msg="cleaning up after shim disconnected" id=8f97e6e824271d5d28829a97c4fa4ef72a061d59d8f926c06110205dce1b25c2 namespace=k8s.io May 17 01:48:26.899856 containerd[2793]: time="2025-05-17T01:48:26.899857795Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 01:48:26.966394 kubelet[4322]: I0517 01:48:26.966352 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvsw6\" (UniqueName: \"kubernetes.io/projected/0086f012-1f79-4125-a28a-ca399f86c285-kube-api-access-hvsw6\") pod \"calico-apiserver-95d6b45b8-r5pmv\" (UID: \"0086f012-1f79-4125-a28a-ca399f86c285\") " pod="calico-apiserver/calico-apiserver-95d6b45b8-r5pmv" May 17 01:48:26.966480 kubelet[4322]: I0517 01:48:26.966405 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-ca-bundle\") pod \"whisker-7558ffd48f-k4ql5\" (UID: \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\") " pod="calico-system/whisker-7558ffd48f-k4ql5" May 17 01:48:26.966480 kubelet[4322]: I0517 01:48:26.966438 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e771814-a5f9-4a19-8f90-467aed0cea74-config-volume\") pod \"coredns-7c65d6cfc9-9kg2c\" (UID: \"2e771814-a5f9-4a19-8f90-467aed0cea74\") " pod="kube-system/coredns-7c65d6cfc9-9kg2c" May 17 01:48:26.966480 kubelet[4322]: I0517 01:48:26.966463 
4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7vh\" (UniqueName: \"kubernetes.io/projected/2e771814-a5f9-4a19-8f90-467aed0cea74-kube-api-access-hh7vh\") pod \"coredns-7c65d6cfc9-9kg2c\" (UID: \"2e771814-a5f9-4a19-8f90-467aed0cea74\") " pod="kube-system/coredns-7c65d6cfc9-9kg2c" May 17 01:48:26.966547 kubelet[4322]: I0517 01:48:26.966481 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59dg\" (UniqueName: \"kubernetes.io/projected/a5e46d9f-129d-4d2e-be7f-85655ce91f55-kube-api-access-f59dg\") pod \"calico-apiserver-95d6b45b8-fsg7s\" (UID: \"a5e46d9f-129d-4d2e-be7f-85655ce91f55\") " pod="calico-apiserver/calico-apiserver-95d6b45b8-fsg7s" May 17 01:48:26.966547 kubelet[4322]: I0517 01:48:26.966499 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4m2c\" (UniqueName: \"kubernetes.io/projected/167f036e-0c64-4fc2-a584-1f781d3f336f-kube-api-access-x4m2c\") pod \"goldmane-8f77d7b6c-dqq55\" (UID: \"167f036e-0c64-4fc2-a584-1f781d3f336f\") " pod="calico-system/goldmane-8f77d7b6c-dqq55" May 17 01:48:26.966547 kubelet[4322]: I0517 01:48:26.966517 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b2192c3-fd97-4d0f-b686-fbf27564a7ef-tigera-ca-bundle\") pod \"calico-kube-controllers-78fc858bc7-skfq5\" (UID: \"3b2192c3-fd97-4d0f-b686-fbf27564a7ef\") " pod="calico-system/calico-kube-controllers-78fc858bc7-skfq5" May 17 01:48:26.966547 kubelet[4322]: I0517 01:48:26.966534 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8mg\" (UniqueName: \"kubernetes.io/projected/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-kube-api-access-vq8mg\") pod \"whisker-7558ffd48f-k4ql5\" (UID: 
\"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\") " pod="calico-system/whisker-7558ffd48f-k4ql5" May 17 01:48:26.966639 kubelet[4322]: I0517 01:48:26.966553 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17a15b4f-bd7a-48c1-8f89-54ee79c68bdb-config-volume\") pod \"coredns-7c65d6cfc9-lvq6d\" (UID: \"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb\") " pod="kube-system/coredns-7c65d6cfc9-lvq6d" May 17 01:48:26.966639 kubelet[4322]: I0517 01:48:26.966572 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5e46d9f-129d-4d2e-be7f-85655ce91f55-calico-apiserver-certs\") pod \"calico-apiserver-95d6b45b8-fsg7s\" (UID: \"a5e46d9f-129d-4d2e-be7f-85655ce91f55\") " pod="calico-apiserver/calico-apiserver-95d6b45b8-fsg7s" May 17 01:48:26.966639 kubelet[4322]: I0517 01:48:26.966587 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmcl\" (UniqueName: \"kubernetes.io/projected/3b2192c3-fd97-4d0f-b686-fbf27564a7ef-kube-api-access-jrmcl\") pod \"calico-kube-controllers-78fc858bc7-skfq5\" (UID: \"3b2192c3-fd97-4d0f-b686-fbf27564a7ef\") " pod="calico-system/calico-kube-controllers-78fc858bc7-skfq5" May 17 01:48:26.966639 kubelet[4322]: I0517 01:48:26.966607 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/167f036e-0c64-4fc2-a584-1f781d3f336f-config\") pod \"goldmane-8f77d7b6c-dqq55\" (UID: \"167f036e-0c64-4fc2-a584-1f781d3f336f\") " pod="calico-system/goldmane-8f77d7b6c-dqq55" May 17 01:48:26.966639 kubelet[4322]: I0517 01:48:26.966621 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/167f036e-0c64-4fc2-a584-1f781d3f336f-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-dqq55\" (UID: \"167f036e-0c64-4fc2-a584-1f781d3f336f\") " pod="calico-system/goldmane-8f77d7b6c-dqq55" May 17 01:48:26.966745 kubelet[4322]: I0517 01:48:26.966662 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/167f036e-0c64-4fc2-a584-1f781d3f336f-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-dqq55\" (UID: \"167f036e-0c64-4fc2-a584-1f781d3f336f\") " pod="calico-system/goldmane-8f77d7b6c-dqq55" May 17 01:48:26.966745 kubelet[4322]: I0517 01:48:26.966698 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-backend-key-pair\") pod \"whisker-7558ffd48f-k4ql5\" (UID: \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\") " pod="calico-system/whisker-7558ffd48f-k4ql5" May 17 01:48:26.966745 kubelet[4322]: I0517 01:48:26.966717 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xn54\" (UniqueName: \"kubernetes.io/projected/17a15b4f-bd7a-48c1-8f89-54ee79c68bdb-kube-api-access-4xn54\") pod \"coredns-7c65d6cfc9-lvq6d\" (UID: \"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb\") " pod="kube-system/coredns-7c65d6cfc9-lvq6d" May 17 01:48:26.966745 kubelet[4322]: I0517 01:48:26.966735 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0086f012-1f79-4125-a28a-ca399f86c285-calico-apiserver-certs\") pod \"calico-apiserver-95d6b45b8-r5pmv\" (UID: \"0086f012-1f79-4125-a28a-ca399f86c285\") " pod="calico-apiserver/calico-apiserver-95d6b45b8-r5pmv" May 17 01:48:27.108422 containerd[2793]: time="2025-05-17T01:48:27.108356773Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvq6d,Uid:17a15b4f-bd7a-48c1-8f89-54ee79c68bdb,Namespace:kube-system,Attempt:0,}" May 17 01:48:27.108542 containerd[2793]: time="2025-05-17T01:48:27.108517574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-fsg7s,Uid:a5e46d9f-129d-4d2e-be7f-85655ce91f55,Namespace:calico-apiserver,Attempt:0,}" May 17 01:48:27.109942 containerd[2793]: time="2025-05-17T01:48:27.109914777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9kg2c,Uid:2e771814-a5f9-4a19-8f90-467aed0cea74,Namespace:kube-system,Attempt:0,}" May 17 01:48:27.111338 containerd[2793]: time="2025-05-17T01:48:27.111312380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-dqq55,Uid:167f036e-0c64-4fc2-a584-1f781d3f336f,Namespace:calico-system,Attempt:0,}" May 17 01:48:27.111421 containerd[2793]: time="2025-05-17T01:48:27.111398060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc858bc7-skfq5,Uid:3b2192c3-fd97-4d0f-b686-fbf27564a7ef,Namespace:calico-system,Attempt:0,}" May 17 01:48:27.112812 containerd[2793]: time="2025-05-17T01:48:27.112791344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7558ffd48f-k4ql5,Uid:e8ef067f-5f3a-4825-bca4-4f1da73bff0a,Namespace:calico-system,Attempt:0,}" May 17 01:48:27.113083 containerd[2793]: time="2025-05-17T01:48:27.113054504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-r5pmv,Uid:0086f012-1f79-4125-a28a-ca399f86c285,Namespace:calico-apiserver,Attempt:0,}" May 17 01:48:27.165794 containerd[2793]: time="2025-05-17T01:48:27.165731546Z" level=error msg="Failed to destroy network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 17 01:48:27.166164 containerd[2793]: time="2025-05-17T01:48:27.166135987Z" level=error msg="encountered an error cleaning up failed sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.166205 containerd[2793]: time="2025-05-17T01:48:27.166186107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9kg2c,Uid:2e771814-a5f9-4a19-8f90-467aed0cea74,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.166428 kubelet[4322]: E0517 01:48:27.166387 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.166480 kubelet[4322]: E0517 01:48:27.166467 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9kg2c" May 17 01:48:27.166506 kubelet[4322]: E0517 01:48:27.166487 4322 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9kg2c" May 17 01:48:27.166553 kubelet[4322]: E0517 01:48:27.166528 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9kg2c_kube-system(2e771814-a5f9-4a19-8f90-467aed0cea74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9kg2c_kube-system(2e771814-a5f9-4a19-8f90-467aed0cea74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9kg2c" podUID="2e771814-a5f9-4a19-8f90-467aed0cea74" May 17 01:48:27.166878 containerd[2793]: time="2025-05-17T01:48:27.166849269Z" level=error msg="Failed to destroy network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167100 containerd[2793]: time="2025-05-17T01:48:27.167056109Z" level=error msg="Failed to destroy network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 17 01:48:27.167157 containerd[2793]: time="2025-05-17T01:48:27.167063109Z" level=error msg="Failed to destroy network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167231 containerd[2793]: time="2025-05-17T01:48:27.167210110Z" level=error msg="encountered an error cleaning up failed sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167272 containerd[2793]: time="2025-05-17T01:48:27.167253670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-r5pmv,Uid:0086f012-1f79-4125-a28a-ca399f86c285,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167313 containerd[2793]: time="2025-05-17T01:48:27.167268470Z" level=error msg="Failed to destroy network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167391 kubelet[4322]: E0517 01:48:27.167368 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167422 kubelet[4322]: E0517 01:48:27.167409 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d6b45b8-r5pmv" May 17 01:48:27.167450 kubelet[4322]: E0517 01:48:27.167428 4322 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d6b45b8-r5pmv" May 17 01:48:27.167474 containerd[2793]: time="2025-05-17T01:48:27.167448310Z" level=error msg="encountered an error cleaning up failed sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167498 kubelet[4322]: E0517 01:48:27.167463 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95d6b45b8-r5pmv_calico-apiserver(0086f012-1f79-4125-a28a-ca399f86c285)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-95d6b45b8-r5pmv_calico-apiserver(0086f012-1f79-4125-a28a-ca399f86c285)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95d6b45b8-r5pmv" podUID="0086f012-1f79-4125-a28a-ca399f86c285" May 17 01:48:27.167538 containerd[2793]: time="2025-05-17T01:48:27.167492270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-dqq55,Uid:167f036e-0c64-4fc2-a584-1f781d3f336f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167538 containerd[2793]: time="2025-05-17T01:48:27.167470310Z" level=error msg="encountered an error cleaning up failed sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167594 containerd[2793]: time="2025-05-17T01:48:27.167563711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvq6d,Uid:17a15b4f-bd7a-48c1-8f89-54ee79c68bdb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167616 containerd[2793]: time="2025-05-17T01:48:27.167587151Z" level=error msg="encountered an error cleaning up failed sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167638 kubelet[4322]: E0517 01:48:27.167601 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167638 kubelet[4322]: E0517 01:48:27.167625 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-dqq55" May 17 01:48:27.167685 kubelet[4322]: E0517 01:48:27.167637 4322 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-dqq55" May 17 01:48:27.167685 kubelet[4322]: E0517 
01:48:27.167656 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167732 containerd[2793]: time="2025-05-17T01:48:27.167631031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7558ffd48f-k4ql5,Uid:e8ef067f-5f3a-4825-bca4-4f1da73bff0a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167732 containerd[2793]: time="2025-05-17T01:48:27.167503270Z" level=error msg="Failed to destroy network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167788 kubelet[4322]: E0517 01:48:27.167696 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lvq6d" May 17 01:48:27.167788 kubelet[4322]: E0517 01:48:27.167713 4322 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lvq6d" May 17 01:48:27.167788 kubelet[4322]: E0517 01:48:27.167722 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.167855 kubelet[4322]: E0517 01:48:27.167661 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:48:27.167855 kubelet[4322]: E0517 01:48:27.167740 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lvq6d_kube-system(17a15b4f-bd7a-48c1-8f89-54ee79c68bdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lvq6d_kube-system(17a15b4f-bd7a-48c1-8f89-54ee79c68bdb)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lvq6d" podUID="17a15b4f-bd7a-48c1-8f89-54ee79c68bdb" May 17 01:48:27.167855 kubelet[4322]: E0517 01:48:27.167752 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7558ffd48f-k4ql5" May 17 01:48:27.168010 kubelet[4322]: E0517 01:48:27.167788 4322 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7558ffd48f-k4ql5" May 17 01:48:27.168010 kubelet[4322]: E0517 01:48:27.167832 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7558ffd48f-k4ql5_calico-system(e8ef067f-5f3a-4825-bca4-4f1da73bff0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7558ffd48f-k4ql5_calico-system(e8ef067f-5f3a-4825-bca4-4f1da73bff0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7558ffd48f-k4ql5" podUID="e8ef067f-5f3a-4825-bca4-4f1da73bff0a" May 17 01:48:27.168090 containerd[2793]: time="2025-05-17T01:48:27.167957711Z" level=error msg="encountered an error cleaning up failed sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.168090 containerd[2793]: time="2025-05-17T01:48:27.167997712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-fsg7s,Uid:a5e46d9f-129d-4d2e-be7f-85655ce91f55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.168156 kubelet[4322]: E0517 01:48:27.168124 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.168182 kubelet[4322]: E0517 01:48:27.168170 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d6b45b8-fsg7s" May 17 01:48:27.168206 kubelet[4322]: E0517 01:48:27.168187 4322 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d6b45b8-fsg7s" May 17 01:48:27.168243 kubelet[4322]: E0517 01:48:27.168222 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95d6b45b8-fsg7s_calico-apiserver(a5e46d9f-129d-4d2e-be7f-85655ce91f55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95d6b45b8-fsg7s_calico-apiserver(a5e46d9f-129d-4d2e-be7f-85655ce91f55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95d6b45b8-fsg7s" podUID="a5e46d9f-129d-4d2e-be7f-85655ce91f55" May 17 01:48:27.168583 containerd[2793]: time="2025-05-17T01:48:27.168556193Z" level=error msg="Failed to destroy network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.168884 containerd[2793]: time="2025-05-17T01:48:27.168852274Z" level=error msg="encountered an error cleaning up failed sandbox 
\"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.168923 containerd[2793]: time="2025-05-17T01:48:27.168898674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc858bc7-skfq5,Uid:3b2192c3-fd97-4d0f-b686-fbf27564a7ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.169046 kubelet[4322]: E0517 01:48:27.169027 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.169076 kubelet[4322]: E0517 01:48:27.169059 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc858bc7-skfq5" May 17 01:48:27.169106 kubelet[4322]: E0517 01:48:27.169078 4322 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc858bc7-skfq5" May 17 01:48:27.169130 kubelet[4322]: E0517 01:48:27.169106 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78fc858bc7-skfq5_calico-system(3b2192c3-fd97-4d0f-b686-fbf27564a7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78fc858bc7-skfq5_calico-system(3b2192c3-fd97-4d0f-b686-fbf27564a7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78fc858bc7-skfq5" podUID="3b2192c3-fd97-4d0f-b686-fbf27564a7ef" May 17 01:48:27.433265 containerd[2793]: time="2025-05-17T01:48:27.433202566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcb6n,Uid:d016dc5a-5728-4bc3-95ba-213735e255c5,Namespace:calico-system,Attempt:0,}" May 17 01:48:27.475257 kubelet[4322]: I0517 01:48:27.475231 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:48:27.475713 containerd[2793]: time="2025-05-17T01:48:27.475673504Z" level=error msg="Failed to destroy network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 17 01:48:27.475764 containerd[2793]: time="2025-05-17T01:48:27.475740705Z" level=info msg="StopPodSandbox for \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\"" May 17 01:48:27.475914 containerd[2793]: time="2025-05-17T01:48:27.475896905Z" level=info msg="Ensure that sandbox 3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c in task-service has been cleanup successfully" May 17 01:48:27.476033 containerd[2793]: time="2025-05-17T01:48:27.476008345Z" level=error msg="encountered an error cleaning up failed sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.476082 containerd[2793]: time="2025-05-17T01:48:27.476051545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcb6n,Uid:d016dc5a-5728-4bc3-95ba-213735e255c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.476225 kubelet[4322]: E0517 01:48:27.476202 4322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.476283 kubelet[4322]: E0517 01:48:27.476241 4322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hcb6n" May 17 01:48:27.476312 kubelet[4322]: E0517 01:48:27.476287 4322 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hcb6n" May 17 01:48:27.476343 kubelet[4322]: E0517 01:48:27.476322 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hcb6n_calico-system(d016dc5a-5728-4bc3-95ba-213735e255c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hcb6n_calico-system(d016dc5a-5728-4bc3-95ba-213735e255c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hcb6n" podUID="d016dc5a-5728-4bc3-95ba-213735e255c5" May 17 01:48:27.477258 kubelet[4322]: I0517 01:48:27.477238 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:48:27.477304 containerd[2793]: time="2025-05-17T01:48:27.477244308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" 
May 17 01:48:27.477605 containerd[2793]: time="2025-05-17T01:48:27.477581229Z" level=info msg="StopPodSandbox for \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\"" May 17 01:48:27.477733 containerd[2793]: time="2025-05-17T01:48:27.477716909Z" level=info msg="Ensure that sandbox 235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565 in task-service has been cleanup successfully" May 17 01:48:27.478093 kubelet[4322]: I0517 01:48:27.478078 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:48:27.478468 containerd[2793]: time="2025-05-17T01:48:27.478447911Z" level=info msg="StopPodSandbox for \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\"" May 17 01:48:27.478586 containerd[2793]: time="2025-05-17T01:48:27.478570751Z" level=info msg="Ensure that sandbox 72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933 in task-service has been cleanup successfully" May 17 01:48:27.478841 kubelet[4322]: I0517 01:48:27.478827 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:48:27.479226 containerd[2793]: time="2025-05-17T01:48:27.479199513Z" level=info msg="StopPodSandbox for \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\"" May 17 01:48:27.479377 containerd[2793]: time="2025-05-17T01:48:27.479359673Z" level=info msg="Ensure that sandbox be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8 in task-service has been cleanup successfully" May 17 01:48:27.479883 kubelet[4322]: I0517 01:48:27.479868 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:48:27.480256 containerd[2793]: time="2025-05-17T01:48:27.480232235Z" level=info msg="StopPodSandbox for 
\"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\"" May 17 01:48:27.480394 containerd[2793]: time="2025-05-17T01:48:27.480376675Z" level=info msg="Ensure that sandbox 1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1 in task-service has been cleanup successfully" May 17 01:48:27.480603 kubelet[4322]: I0517 01:48:27.480587 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:48:27.480997 containerd[2793]: time="2025-05-17T01:48:27.480971237Z" level=info msg="StopPodSandbox for \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\"" May 17 01:48:27.481139 containerd[2793]: time="2025-05-17T01:48:27.481123437Z" level=info msg="Ensure that sandbox e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9 in task-service has been cleanup successfully" May 17 01:48:27.481314 kubelet[4322]: I0517 01:48:27.481300 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:48:27.481702 containerd[2793]: time="2025-05-17T01:48:27.481676718Z" level=info msg="StopPodSandbox for \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\"" May 17 01:48:27.481834 containerd[2793]: time="2025-05-17T01:48:27.481819719Z" level=info msg="Ensure that sandbox 9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9 in task-service has been cleanup successfully" May 17 01:48:27.496397 containerd[2793]: time="2025-05-17T01:48:27.496342432Z" level=error msg="StopPodSandbox for \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\" failed" error="failed to destroy network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 17 01:48:27.496542 kubelet[4322]: E0517 01:48:27.496511 4322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:48:27.496603 kubelet[4322]: E0517 01:48:27.496563 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c"} May 17 01:48:27.496642 kubelet[4322]: E0517 01:48:27.496622 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:27.496700 kubelet[4322]: E0517 01:48:27.496652 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lvq6d" 
podUID="17a15b4f-bd7a-48c1-8f89-54ee79c68bdb" May 17 01:48:27.498444 containerd[2793]: time="2025-05-17T01:48:27.498406837Z" level=error msg="StopPodSandbox for \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\" failed" error="failed to destroy network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.498586 kubelet[4322]: E0517 01:48:27.498560 4322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:48:27.498623 kubelet[4322]: E0517 01:48:27.498598 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565"} May 17 01:48:27.498644 kubelet[4322]: E0517 01:48:27.498629 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:27.498688 kubelet[4322]: E0517 01:48:27.498648 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7558ffd48f-k4ql5" podUID="e8ef067f-5f3a-4825-bca4-4f1da73bff0a" May 17 01:48:27.499763 containerd[2793]: time="2025-05-17T01:48:27.499735640Z" level=error msg="StopPodSandbox for \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\" failed" error="failed to destroy network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.499892 kubelet[4322]: E0517 01:48:27.499864 4322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:48:27.499920 kubelet[4322]: E0517 01:48:27.499901 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933"} May 17 01:48:27.499947 kubelet[4322]: E0517 01:48:27.499937 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"167f036e-0c64-4fc2-a584-1f781d3f336f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:27.499996 kubelet[4322]: E0517 01:48:27.499958 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"167f036e-0c64-4fc2-a584-1f781d3f336f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:48:27.500062 containerd[2793]: time="2025-05-17T01:48:27.500034761Z" level=error msg="StopPodSandbox for \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\" failed" error="failed to destroy network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.500171 kubelet[4322]: E0517 01:48:27.500154 4322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:48:27.500199 kubelet[4322]: 
E0517 01:48:27.500173 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8"} May 17 01:48:27.500199 kubelet[4322]: E0517 01:48:27.500193 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e771814-a5f9-4a19-8f90-467aed0cea74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:27.500253 kubelet[4322]: E0517 01:48:27.500208 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e771814-a5f9-4a19-8f90-467aed0cea74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9kg2c" podUID="2e771814-a5f9-4a19-8f90-467aed0cea74" May 17 01:48:27.500862 containerd[2793]: time="2025-05-17T01:48:27.500833483Z" level=error msg="StopPodSandbox for \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\" failed" error="failed to destroy network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.501007 kubelet[4322]: E0517 01:48:27.500959 4322 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:48:27.501035 kubelet[4322]: E0517 01:48:27.501017 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1"} May 17 01:48:27.501056 kubelet[4322]: E0517 01:48:27.501042 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0086f012-1f79-4125-a28a-ca399f86c285\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:27.501100 kubelet[4322]: E0517 01:48:27.501060 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0086f012-1f79-4125-a28a-ca399f86c285\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95d6b45b8-r5pmv" podUID="0086f012-1f79-4125-a28a-ca399f86c285" May 17 01:48:27.501332 containerd[2793]: time="2025-05-17T01:48:27.501298484Z" level=error msg="StopPodSandbox for 
\"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\" failed" error="failed to destroy network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.501445 kubelet[4322]: E0517 01:48:27.501426 4322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:48:27.501490 kubelet[4322]: E0517 01:48:27.501448 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9"} May 17 01:48:27.501490 kubelet[4322]: E0517 01:48:27.501470 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5e46d9f-129d-4d2e-be7f-85655ce91f55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:27.501566 kubelet[4322]: E0517 01:48:27.501488 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5e46d9f-129d-4d2e-be7f-85655ce91f55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95d6b45b8-fsg7s" podUID="a5e46d9f-129d-4d2e-be7f-85655ce91f55" May 17 01:48:27.504396 containerd[2793]: time="2025-05-17T01:48:27.504344091Z" level=error msg="StopPodSandbox for \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\" failed" error="failed to destroy network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:27.504515 kubelet[4322]: E0517 01:48:27.504493 4322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:48:27.504545 kubelet[4322]: E0517 01:48:27.504521 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9"} May 17 01:48:27.504569 kubelet[4322]: E0517 01:48:27.504546 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b2192c3-fd97-4d0f-b686-fbf27564a7ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:27.504569 kubelet[4322]: E0517 01:48:27.504562 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b2192c3-fd97-4d0f-b686-fbf27564a7ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78fc858bc7-skfq5" podUID="3b2192c3-fd97-4d0f-b686-fbf27564a7ef" May 17 01:48:28.483149 kubelet[4322]: I0517 01:48:28.483120 4322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:48:28.483634 containerd[2793]: time="2025-05-17T01:48:28.483604290Z" level=info msg="StopPodSandbox for \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\"" May 17 01:48:28.483822 containerd[2793]: time="2025-05-17T01:48:28.483769610Z" level=info msg="Ensure that sandbox 30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973 in task-service has been cleanup successfully" May 17 01:48:28.503693 containerd[2793]: time="2025-05-17T01:48:28.503653134Z" level=error msg="StopPodSandbox for \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\" failed" error="failed to destroy network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:48:28.503844 kubelet[4322]: E0517 
01:48:28.503810 4322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:48:28.503883 kubelet[4322]: E0517 01:48:28.503851 4322 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973"} May 17 01:48:28.503907 kubelet[4322]: E0517 01:48:28.503880 4322 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d016dc5a-5728-4bc3-95ba-213735e255c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:48:28.503955 kubelet[4322]: E0517 01:48:28.503904 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d016dc5a-5728-4bc3-95ba-213735e255c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hcb6n" podUID="d016dc5a-5728-4bc3-95ba-213735e255c5" May 17 01:48:29.340234 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3258062686.mount: Deactivated successfully. May 17 01:48:29.356789 containerd[2793]: time="2025-05-17T01:48:29.356752498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:29.356833 containerd[2793]: time="2025-05-17T01:48:29.356797338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 17 01:48:29.357410 containerd[2793]: time="2025-05-17T01:48:29.357393220Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:29.359062 containerd[2793]: time="2025-05-17T01:48:29.359041063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:29.359663 containerd[2793]: time="2025-05-17T01:48:29.359636904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 1.882365516s" May 17 01:48:29.359697 containerd[2793]: time="2025-05-17T01:48:29.359667984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 01:48:29.365043 containerd[2793]: time="2025-05-17T01:48:29.365013475Z" level=info msg="CreateContainer within sandbox \"07513021ecf30163835deba4d1c232d1fce0a567e72bc8832d2ed57d200578db\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 
01:48:29.369885 containerd[2793]: time="2025-05-17T01:48:29.369852845Z" level=info msg="CreateContainer within sandbox \"07513021ecf30163835deba4d1c232d1fce0a567e72bc8832d2ed57d200578db\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2029e8ceb7ee796d44b9c710055afe086b58f76e3b71df7c08218a20d5d43af4\"" May 17 01:48:29.370235 containerd[2793]: time="2025-05-17T01:48:29.370208126Z" level=info msg="StartContainer for \"2029e8ceb7ee796d44b9c710055afe086b58f76e3b71df7c08218a20d5d43af4\"" May 17 01:48:29.417212 containerd[2793]: time="2025-05-17T01:48:29.417180301Z" level=info msg="StartContainer for \"2029e8ceb7ee796d44b9c710055afe086b58f76e3b71df7c08218a20d5d43af4\" returns successfully" May 17 01:48:29.561919 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 01:48:29.562032 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 01:48:29.619178 kubelet[4322]: I0517 01:48:29.619088 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p5bj2" podStartSLOduration=1.180653974 podStartE2EDuration="6.619064233s" podCreationTimestamp="2025-05-17 01:48:23 +0000 UTC" firstStartedPulling="2025-05-17 01:48:23.921847287 +0000 UTC m=+21.560289343" lastFinishedPulling="2025-05-17 01:48:29.360257546 +0000 UTC m=+26.998699602" observedRunningTime="2025-05-17 01:48:29.509330929 +0000 UTC m=+27.147772985" watchObservedRunningTime="2025-05-17 01:48:29.619064233 +0000 UTC m=+27.257506289" May 17 01:48:29.620038 containerd[2793]: time="2025-05-17T01:48:29.620006394Z" level=info msg="StopPodSandbox for \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\"" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.658 [INFO][6283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.658 
[INFO][6283] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" iface="eth0" netns="/var/run/netns/cni-6fbf7d05-e63e-97ea-9133-d5367d1c5cff" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.658 [INFO][6283] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" iface="eth0" netns="/var/run/netns/cni-6fbf7d05-e63e-97ea-9133-d5367d1c5cff" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.658 [INFO][6283] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" iface="eth0" netns="/var/run/netns/cni-6fbf7d05-e63e-97ea-9133-d5367d1c5cff" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.658 [INFO][6283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.658 [INFO][6283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.692 [INFO][6315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.692 [INFO][6315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.692 [INFO][6315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.699 [WARNING][6315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.699 [INFO][6315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.700 [INFO][6315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:29.704012 containerd[2793]: 2025-05-17 01:48:29.702 [INFO][6283] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:48:29.704331 containerd[2793]: time="2025-05-17T01:48:29.704203846Z" level=info msg="TearDown network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\" successfully" May 17 01:48:29.704331 containerd[2793]: time="2025-05-17T01:48:29.704230246Z" level=info msg="StopPodSandbox for \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\" returns successfully" May 17 01:48:29.880738 kubelet[4322]: I0517 01:48:29.880670 4322 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq8mg\" (UniqueName: \"kubernetes.io/projected/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-kube-api-access-vq8mg\") pod \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\" (UID: \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\") " May 17 01:48:29.880738 kubelet[4322]: I0517 01:48:29.880724 4322 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-ca-bundle\") pod \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\" (UID: \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\") " May 17 01:48:29.880829 kubelet[4322]: I0517 01:48:29.880747 4322 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-backend-key-pair\") pod \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\" (UID: \"e8ef067f-5f3a-4825-bca4-4f1da73bff0a\") " May 17 01:48:29.881115 kubelet[4322]: I0517 01:48:29.881085 4322 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e8ef067f-5f3a-4825-bca4-4f1da73bff0a" (UID: "e8ef067f-5f3a-4825-bca4-4f1da73bff0a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 01:48:29.882764 kubelet[4322]: I0517 01:48:29.882737 4322 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-kube-api-access-vq8mg" (OuterVolumeSpecName: "kube-api-access-vq8mg") pod "e8ef067f-5f3a-4825-bca4-4f1da73bff0a" (UID: "e8ef067f-5f3a-4825-bca4-4f1da73bff0a"). InnerVolumeSpecName "kube-api-access-vq8mg". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 01:48:29.882851 kubelet[4322]: I0517 01:48:29.882825 4322 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e8ef067f-5f3a-4825-bca4-4f1da73bff0a" (UID: "e8ef067f-5f3a-4825-bca4-4f1da73bff0a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 01:48:29.980931 kubelet[4322]: I0517 01:48:29.980908 4322 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-ca-bundle\") on node \"ci-4081.3.3-n-a9b446c9a0\" DevicePath \"\"" May 17 01:48:29.980931 kubelet[4322]: I0517 01:48:29.980930 4322 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-a9b446c9a0\" DevicePath \"\"" May 17 01:48:29.980987 kubelet[4322]: I0517 01:48:29.980941 4322 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq8mg\" (UniqueName: \"kubernetes.io/projected/e8ef067f-5f3a-4825-bca4-4f1da73bff0a-kube-api-access-vq8mg\") on node \"ci-4081.3.3-n-a9b446c9a0\" DevicePath \"\"" May 17 01:48:30.341219 systemd[1]: run-netns-cni\x2d6fbf7d05\x2de63e\x2d97ea\x2d9133\x2dd5367d1c5cff.mount: Deactivated 
successfully. May 17 01:48:30.341349 systemd[1]: var-lib-kubelet-pods-e8ef067f\x2d5f3a\x2d4825\x2dbca4\x2d4f1da73bff0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvq8mg.mount: Deactivated successfully. May 17 01:48:30.341445 systemd[1]: var-lib-kubelet-pods-e8ef067f\x2d5f3a\x2d4825\x2dbca4\x2d4f1da73bff0a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 01:48:30.683901 kubelet[4322]: I0517 01:48:30.683787 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26bebf9d-4188-4210-90ef-079cfef2bc0c-whisker-backend-key-pair\") pod \"whisker-76cc6bcc89-ghtr6\" (UID: \"26bebf9d-4188-4210-90ef-079cfef2bc0c\") " pod="calico-system/whisker-76cc6bcc89-ghtr6" May 17 01:48:30.683901 kubelet[4322]: I0517 01:48:30.683820 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26bebf9d-4188-4210-90ef-079cfef2bc0c-whisker-ca-bundle\") pod \"whisker-76cc6bcc89-ghtr6\" (UID: \"26bebf9d-4188-4210-90ef-079cfef2bc0c\") " pod="calico-system/whisker-76cc6bcc89-ghtr6" May 17 01:48:30.683901 kubelet[4322]: I0517 01:48:30.683856 4322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kwkq\" (UniqueName: \"kubernetes.io/projected/26bebf9d-4188-4210-90ef-079cfef2bc0c-kube-api-access-9kwkq\") pod \"whisker-76cc6bcc89-ghtr6\" (UID: \"26bebf9d-4188-4210-90ef-079cfef2bc0c\") " pod="calico-system/whisker-76cc6bcc89-ghtr6" May 17 01:48:30.825202 containerd[2793]: time="2025-05-17T01:48:30.825160384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cc6bcc89-ghtr6,Uid:26bebf9d-4188-4210-90ef-079cfef2bc0c,Namespace:calico-system,Attempt:0,}" May 17 01:48:30.840082 kernel: bpftool[6630]: memfd_create() called without MFD_EXEC or 
MFD_NOEXEC_SEAL set May 17 01:48:30.906973 systemd-networkd[2320]: cali4820faa949f: Link UP May 17 01:48:30.907600 systemd-networkd[2320]: cali4820faa949f: Gained carrier May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.857 [INFO][6627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0 whisker-76cc6bcc89- calico-system 26bebf9d-4188-4210-90ef-079cfef2bc0c 885 0 2025-05-17 01:48:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76cc6bcc89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-a9b446c9a0 whisker-76cc6bcc89-ghtr6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4820faa949f [] [] }} ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.857 [INFO][6627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.877 [INFO][6654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" HandleID="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.877 [INFO][6654] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" HandleID="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001bd820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-a9b446c9a0", "pod":"whisker-76cc6bcc89-ghtr6", "timestamp":"2025-05-17 01:48:30.877110523 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.877 [INFO][6654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.877 [INFO][6654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.877 [INFO][6654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0' May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.885 [INFO][6654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.887 [INFO][6654] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.890 [INFO][6654] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.891 [INFO][6654] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.893 [INFO][6654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.893 [INFO][6654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.894 [INFO][6654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785 May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.896 [INFO][6654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.899 [INFO][6654] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.17.129/26] block=192.168.17.128/26 handle="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.899 [INFO][6654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.129/26] handle="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.900 [INFO][6654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:30.914821 containerd[2793]: 2025-05-17 01:48:30.900 [INFO][6654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.129/26] IPv6=[] ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" HandleID="k8s-pod-network.399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" May 17 01:48:30.915249 containerd[2793]: 2025-05-17 01:48:30.902 [INFO][6627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0", GenerateName:"whisker-76cc6bcc89-", Namespace:"calico-system", SelfLink:"", UID:"26bebf9d-4188-4210-90ef-079cfef2bc0c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cc6bcc89", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"whisker-76cc6bcc89-ghtr6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4820faa949f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:30.915249 containerd[2793]: 2025-05-17 01:48:30.902 [INFO][6627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.129/32] ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" May 17 01:48:30.915249 containerd[2793]: 2025-05-17 01:48:30.902 [INFO][6627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4820faa949f ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" May 17 01:48:30.915249 containerd[2793]: 2025-05-17 01:48:30.907 [INFO][6627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" May 17 01:48:30.915249 containerd[2793]: 2025-05-17 01:48:30.907 [INFO][6627] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0", GenerateName:"whisker-76cc6bcc89-", Namespace:"calico-system", SelfLink:"", UID:"26bebf9d-4188-4210-90ef-079cfef2bc0c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cc6bcc89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785", Pod:"whisker-76cc6bcc89-ghtr6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4820faa949f", MAC:"ae:ba:f0:55:b6:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:30.915249 containerd[2793]: 2025-05-17 01:48:30.912 [INFO][6627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785" Namespace="calico-system" Pod="whisker-76cc6bcc89-ghtr6" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--76cc6bcc89--ghtr6-eth0" May 17 01:48:30.926893 containerd[2793]: time="2025-05-17T01:48:30.926835618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:30.926893 containerd[2793]: time="2025-05-17T01:48:30.926885578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:30.926936 containerd[2793]: time="2025-05-17T01:48:30.926897938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:30.926996 containerd[2793]: time="2025-05-17T01:48:30.926979538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:30.969490 containerd[2793]: time="2025-05-17T01:48:30.969462459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cc6bcc89-ghtr6,Uid:26bebf9d-4188-4210-90ef-079cfef2bc0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"399b10209509a9ea1bf5a863264a7e7af29576f2d9343f46329421b58ac28785\"" May 17 01:48:30.970496 containerd[2793]: time="2025-05-17T01:48:30.970471301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:48:30.991858 containerd[2793]: time="2025-05-17T01:48:30.991821622Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:30.992103 containerd[2793]: time="2025-05-17T01:48:30.992075503Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:30.992140 containerd[2793]: time="2025-05-17T01:48:30.992099183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:48:30.992295 kubelet[4322]: E0517 01:48:30.992255 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:48:30.992357 kubelet[4322]: E0517 01:48:30.992307 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:48:30.992453 kubelet[4322]: E0517 01:48:30.992421 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c14844c5913491a84860e1a0b8551a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:30.994829 containerd[2793]: 
time="2025-05-17T01:48:30.994807108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:48:30.999838 systemd-networkd[2320]: vxlan.calico: Link UP May 17 01:48:30.999842 systemd-networkd[2320]: vxlan.calico: Gained carrier May 17 01:48:31.016844 containerd[2793]: time="2025-05-17T01:48:31.016809268Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:31.018342 containerd[2793]: time="2025-05-17T01:48:31.018313111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:31.018392 containerd[2793]: time="2025-05-17T01:48:31.018371231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:48:31.018509 kubelet[4322]: E0517 01:48:31.018482 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:48:31.018568 kubelet[4322]: E0517 01:48:31.018520 4322 
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:48:31.018643 kubelet[4322]: E0517 01:48:31.018605 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true
,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:31.019768 kubelet[4322]: E0517 01:48:31.019741 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" 
podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:48:31.490922 kubelet[4322]: E0517 01:48:31.490882 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:48:32.069178 systemd-networkd[2320]: cali4820faa949f: Gained IPv6LL May 17 01:48:32.261157 systemd-networkd[2320]: vxlan.calico: Gained IPv6LL May 17 01:48:32.432428 kubelet[4322]: I0517 01:48:32.432357 4322 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ef067f-5f3a-4825-bca4-4f1da73bff0a" path="/var/lib/kubelet/pods/e8ef067f-5f3a-4825-bca4-4f1da73bff0a/volumes" May 17 01:48:32.491570 kubelet[4322]: E0517 01:48:32.491538 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:48:38.431390 containerd[2793]: time="2025-05-17T01:48:38.431340100Z" level=info msg="StopPodSandbox for \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\"" May 17 01:48:38.431734 containerd[2793]: time="2025-05-17T01:48:38.431341860Z" level=info msg="StopPodSandbox for \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\"" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.468 [INFO][7043] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.468 [INFO][7043] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" iface="eth0" netns="/var/run/netns/cni-2061530e-c8aa-14a8-cfc6-fc812b2cd865" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.469 [INFO][7043] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" iface="eth0" netns="/var/run/netns/cni-2061530e-c8aa-14a8-cfc6-fc812b2cd865" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.469 [INFO][7043] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" iface="eth0" netns="/var/run/netns/cni-2061530e-c8aa-14a8-cfc6-fc812b2cd865" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.469 [INFO][7043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.469 [INFO][7043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.487 [INFO][7084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.487 [INFO][7084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.487 [INFO][7084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.494 [WARNING][7084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.494 [INFO][7084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.495 [INFO][7084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:38.497798 containerd[2793]: 2025-05-17 01:48:38.496 [INFO][7043] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:48:38.498180 containerd[2793]: time="2025-05-17T01:48:38.497999096Z" level=info msg="TearDown network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\" successfully" May 17 01:48:38.498180 containerd[2793]: time="2025-05-17T01:48:38.498032496Z" level=info msg="StopPodSandbox for \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\" returns successfully" May 17 01:48:38.498948 containerd[2793]: time="2025-05-17T01:48:38.498920977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-r5pmv,Uid:0086f012-1f79-4125-a28a-ca399f86c285,Namespace:calico-apiserver,Attempt:1,}" May 17 01:48:38.500191 systemd[1]: run-netns-cni\x2d2061530e\x2dc8aa\x2d14a8\x2dcfc6\x2dfc812b2cd865.mount: Deactivated successfully. May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.470 [INFO][7044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.470 [INFO][7044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" iface="eth0" netns="/var/run/netns/cni-f1761fe4-d3b7-a6ef-9830-fede596919b4" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.470 [INFO][7044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" iface="eth0" netns="/var/run/netns/cni-f1761fe4-d3b7-a6ef-9830-fede596919b4" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.470 [INFO][7044] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" iface="eth0" netns="/var/run/netns/cni-f1761fe4-d3b7-a6ef-9830-fede596919b4" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.470 [INFO][7044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.470 [INFO][7044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.487 [INFO][7086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.487 [INFO][7086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.495 [INFO][7086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.502 [WARNING][7086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.502 [INFO][7086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.503 [INFO][7086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:38.507236 containerd[2793]: 2025-05-17 01:48:38.505 [INFO][7044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:48:38.507531 containerd[2793]: time="2025-05-17T01:48:38.507360866Z" level=info msg="TearDown network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\" successfully" May 17 01:48:38.507531 containerd[2793]: time="2025-05-17T01:48:38.507389346Z" level=info msg="StopPodSandbox for \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\" returns successfully" May 17 01:48:38.507807 containerd[2793]: time="2025-05-17T01:48:38.507782987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9kg2c,Uid:2e771814-a5f9-4a19-8f90-467aed0cea74,Namespace:kube-system,Attempt:1,}" May 17 01:48:38.509165 systemd[1]: run-netns-cni\x2df1761fe4\x2dd3b7\x2da6ef\x2d9830\x2dfede596919b4.mount: Deactivated successfully. 
May 17 01:48:38.587272 systemd-networkd[2320]: calia17c6729f92: Link UP May 17 01:48:38.587532 systemd-networkd[2320]: calia17c6729f92: Gained carrier May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.535 [INFO][7128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0 calico-apiserver-95d6b45b8- calico-apiserver 0086f012-1f79-4125-a28a-ca399f86c285 932 0 2025-05-17 01:48:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:95d6b45b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-a9b446c9a0 calico-apiserver-95d6b45b8-r5pmv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia17c6729f92 [] [] }} ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.535 [INFO][7128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" HandleID="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.596419 
containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" HandleID="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042e9c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-a9b446c9a0", "pod":"calico-apiserver-95d6b45b8-r5pmv", "timestamp":"2025-05-17 01:48:38.557287683 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0' May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.565 [INFO][7188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.569 [INFO][7188] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.572 [INFO][7188] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.573 [INFO][7188] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.575 [INFO][7188] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.575 [INFO][7188] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.576 [INFO][7188] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27 May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.580 [INFO][7188] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.584 [INFO][7188] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.17.130/26] block=192.168.17.128/26 handle="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.584 [INFO][7188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.130/26] handle="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.584 [INFO][7188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:38.596419 containerd[2793]: 2025-05-17 01:48:38.584 [INFO][7188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.130/26] IPv6=[] ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" HandleID="k8s-pod-network.d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.596906 containerd[2793]: 2025-05-17 01:48:38.585 [INFO][7128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0086f012-1f79-4125-a28a-ca399f86c285", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"calico-apiserver-95d6b45b8-r5pmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia17c6729f92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:38.596906 containerd[2793]: 2025-05-17 01:48:38.586 [INFO][7128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.130/32] ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.596906 containerd[2793]: 2025-05-17 01:48:38.586 [INFO][7128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia17c6729f92 ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.596906 containerd[2793]: 2025-05-17 01:48:38.587 [INFO][7128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" 
WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.596906 containerd[2793]: 2025-05-17 01:48:38.588 [INFO][7128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0086f012-1f79-4125-a28a-ca399f86c285", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27", Pod:"calico-apiserver-95d6b45b8-r5pmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia17c6729f92", MAC:"d2:fe:82:c1:53:8b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:38.596906 containerd[2793]: 2025-05-17 01:48:38.595 [INFO][7128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-r5pmv" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:48:38.608468 containerd[2793]: time="2025-05-17T01:48:38.608105421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:38.608567 containerd[2793]: time="2025-05-17T01:48:38.608465622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:38.608567 containerd[2793]: time="2025-05-17T01:48:38.608478782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:38.608656 containerd[2793]: time="2025-05-17T01:48:38.608572462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:38.650730 containerd[2793]: time="2025-05-17T01:48:38.650693030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-r5pmv,Uid:0086f012-1f79-4125-a28a-ca399f86c285,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27\"" May 17 01:48:38.651727 containerd[2793]: time="2025-05-17T01:48:38.651707391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 01:48:38.687492 systemd-networkd[2320]: cali6e27f828eb9: Link UP May 17 01:48:38.687682 systemd-networkd[2320]: cali6e27f828eb9: Gained carrier May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.537 [INFO][7145] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0 coredns-7c65d6cfc9- kube-system 2e771814-a5f9-4a19-8f90-467aed0cea74 933 0 2025-05-17 01:48:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-a9b446c9a0 coredns-7c65d6cfc9-9kg2c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6e27f828eb9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.537 [INFO][7145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" 
May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" HandleID="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" HandleID="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005b6810), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-a9b446c9a0", "pod":"coredns-7c65d6cfc9-9kg2c", "timestamp":"2025-05-17 01:48:38.557590604 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.557 [INFO][7190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.584 [INFO][7190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.584 [INFO][7190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0' May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.666 [INFO][7190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.670 [INFO][7190] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.673 [INFO][7190] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.675 [INFO][7190] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.676 [INFO][7190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.676 [INFO][7190] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.677 [INFO][7190] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66 May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.680 [INFO][7190] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.684 [INFO][7190] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.17.131/26] block=192.168.17.128/26 handle="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.684 [INFO][7190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.131/26] handle="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.684 [INFO][7190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:38.695345 containerd[2793]: 2025-05-17 01:48:38.684 [INFO][7190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.131/26] IPv6=[] ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" HandleID="k8s-pod-network.02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.695818 containerd[2793]: 2025-05-17 01:48:38.685 [INFO][7145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e771814-a5f9-4a19-8f90-467aed0cea74", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"coredns-7c65d6cfc9-9kg2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e27f828eb9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:38.695818 containerd[2793]: 2025-05-17 01:48:38.685 [INFO][7145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.131/32] ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.695818 containerd[2793]: 2025-05-17 01:48:38.685 [INFO][7145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e27f828eb9 ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.695818 containerd[2793]: 2025-05-17 01:48:38.687 [INFO][7145] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.695818 containerd[2793]: 2025-05-17 01:48:38.688 [INFO][7145] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e771814-a5f9-4a19-8f90-467aed0cea74", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66", Pod:"coredns-7c65d6cfc9-9kg2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e27f828eb9", MAC:"b2:d9:ad:75:5e:97", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:38.695818 containerd[2793]: 2025-05-17 01:48:38.693 [INFO][7145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9kg2c" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:48:38.707857 containerd[2793]: time="2025-05-17T01:48:38.707797495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:38.707924 containerd[2793]: time="2025-05-17T01:48:38.707856135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:38.707924 containerd[2793]: time="2025-05-17T01:48:38.707868295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:38.707972 containerd[2793]: time="2025-05-17T01:48:38.707956655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:38.762605 containerd[2793]: time="2025-05-17T01:48:38.762579597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9kg2c,Uid:2e771814-a5f9-4a19-8f90-467aed0cea74,Namespace:kube-system,Attempt:1,} returns sandbox id \"02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66\"" May 17 01:48:38.764297 containerd[2793]: time="2025-05-17T01:48:38.764215479Z" level=info msg="CreateContainer within sandbox \"02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 01:48:38.768026 containerd[2793]: time="2025-05-17T01:48:38.768001443Z" level=info msg="CreateContainer within sandbox \"02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db6dd68de405abe13d65e291c5fbbe6a8b855da12284e2e74bf018a0d352e1bf\"" May 17 01:48:38.768392 containerd[2793]: time="2025-05-17T01:48:38.768371404Z" level=info msg="StartContainer for \"db6dd68de405abe13d65e291c5fbbe6a8b855da12284e2e74bf018a0d352e1bf\"" May 17 01:48:38.805317 containerd[2793]: time="2025-05-17T01:48:38.805269166Z" level=info msg="StartContainer for \"db6dd68de405abe13d65e291c5fbbe6a8b855da12284e2e74bf018a0d352e1bf\" returns successfully" May 17 01:48:39.512134 kubelet[4322]: I0517 01:48:39.512083 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9kg2c" podStartSLOduration=29.512060815 podStartE2EDuration="29.512060815s" podCreationTimestamp="2025-05-17 01:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:48:39.511545934 +0000 UTC m=+37.149987990" watchObservedRunningTime="2025-05-17 01:48:39.512060815 +0000 UTC m=+37.150502871" May 17 01:48:39.585799 containerd[2793]: 
time="2025-05-17T01:48:39.585755653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:39.586125 containerd[2793]: time="2025-05-17T01:48:39.585786133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 17 01:48:39.586545 containerd[2793]: time="2025-05-17T01:48:39.586522854Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:39.588601 containerd[2793]: time="2025-05-17T01:48:39.588578296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:39.589188 containerd[2793]: time="2025-05-17T01:48:39.589167657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 937.430906ms" May 17 01:48:39.589211 containerd[2793]: time="2025-05-17T01:48:39.589195857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 01:48:39.590739 containerd[2793]: time="2025-05-17T01:48:39.590678099Z" level=info msg="CreateContainer within sandbox \"d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 01:48:39.593918 containerd[2793]: time="2025-05-17T01:48:39.593892622Z" level=info msg="CreateContainer 
within sandbox \"d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1d20236f034f69beeec2ebea58c0e75328eeedaf2d3a57221e836c887086cf45\"" May 17 01:48:39.594194 containerd[2793]: time="2025-05-17T01:48:39.594176382Z" level=info msg="StartContainer for \"1d20236f034f69beeec2ebea58c0e75328eeedaf2d3a57221e836c887086cf45\"" May 17 01:48:39.647828 containerd[2793]: time="2025-05-17T01:48:39.647797120Z" level=info msg="StartContainer for \"1d20236f034f69beeec2ebea58c0e75328eeedaf2d3a57221e836c887086cf45\" returns successfully" May 17 01:48:40.006181 systemd-networkd[2320]: calia17c6729f92: Gained IPv6LL May 17 01:48:40.431300 containerd[2793]: time="2025-05-17T01:48:40.431188408Z" level=info msg="StopPodSandbox for \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\"" May 17 01:48:40.431300 containerd[2793]: time="2025-05-17T01:48:40.431188488Z" level=info msg="StopPodSandbox for \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\"" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" iface="eth0" netns="/var/run/netns/cni-7de81a06-78da-efba-ddd9-3f0f20b80341" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" iface="eth0" netns="/var/run/netns/cni-7de81a06-78da-efba-ddd9-3f0f20b80341" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" iface="eth0" netns="/var/run/netns/cni-7de81a06-78da-efba-ddd9-3f0f20b80341" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.485 [INFO][7555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.485 [INFO][7555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.485 [INFO][7555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.494 [WARNING][7555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.494 [INFO][7555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.495 [INFO][7555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:40.498290 containerd[2793]: 2025-05-17 01:48:40.497 [INFO][7510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:48:40.498642 containerd[2793]: time="2025-05-17T01:48:40.498460675Z" level=info msg="TearDown network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\" successfully" May 17 01:48:40.498642 containerd[2793]: time="2025-05-17T01:48:40.498496435Z" level=info msg="StopPodSandbox for \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\" returns successfully" May 17 01:48:40.499051 containerd[2793]: time="2025-05-17T01:48:40.499030475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcb6n,Uid:d016dc5a-5728-4bc3-95ba-213735e255c5,Namespace:calico-system,Attempt:1,}" May 17 01:48:40.500336 systemd[1]: run-netns-cni\x2d7de81a06\x2d78da\x2defba\x2dddd9\x2d3f0f20b80341.mount: Deactivated successfully. 
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" iface="eth0" netns="/var/run/netns/cni-d4dfb158-fdd0-c4bf-9ce6-e0d0a5942f26"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" iface="eth0" netns="/var/run/netns/cni-d4dfb158-fdd0-c4bf-9ce6-e0d0a5942f26"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" iface="eth0" netns="/var/run/netns/cni-d4dfb158-fdd0-c4bf-9ce6-e0d0a5942f26"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.467 [INFO][7511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.468 [INFO][7511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.485 [INFO][7556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.485 [INFO][7556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.495 [INFO][7556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.503 [WARNING][7556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.503 [INFO][7556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.504 [INFO][7556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 01:48:40.507622 containerd[2793]: 2025-05-17 01:48:40.506 [INFO][7511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c"
May 17 01:48:40.507932 containerd[2793]: time="2025-05-17T01:48:40.507747604Z" level=info msg="TearDown network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\" successfully"
May 17 01:48:40.507932 containerd[2793]: time="2025-05-17T01:48:40.507770044Z" level=info msg="StopPodSandbox for \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\" returns successfully"
May 17 01:48:40.508123 containerd[2793]: time="2025-05-17T01:48:40.508099725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvq6d,Uid:17a15b4f-bd7a-48c1-8f89-54ee79c68bdb,Namespace:kube-system,Attempt:1,}"
May 17 01:48:40.509631 systemd[1]: run-netns-cni\x2dd4dfb158\x2dfdd0\x2dc4bf\x2d9ce6\x2de0d0a5942f26.mount: Deactivated successfully.
May 17 01:48:40.513848 kubelet[4322]: I0517 01:48:40.513799 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-95d6b45b8-r5pmv" podStartSLOduration=20.575540263 podStartE2EDuration="21.51378121s" podCreationTimestamp="2025-05-17 01:48:19 +0000 UTC" firstStartedPulling="2025-05-17 01:48:38.651528551 +0000 UTC m=+36.289970607" lastFinishedPulling="2025-05-17 01:48:39.589769498 +0000 UTC m=+37.228211554" observedRunningTime="2025-05-17 01:48:40.51374833 +0000 UTC m=+38.152190346" watchObservedRunningTime="2025-05-17 01:48:40.51378121 +0000 UTC m=+38.152223226"
May 17 01:48:40.579027 systemd-networkd[2320]: cali87ed4325bb9: Link UP
May 17 01:48:40.579350 systemd-networkd[2320]: cali87ed4325bb9: Gained carrier
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.530 [INFO][7596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0 csi-node-driver- calico-system d016dc5a-5728-4bc3-95ba-213735e255c5 961 0 2025-05-17 01:48:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-a9b446c9a0 csi-node-driver-hcb6n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali87ed4325bb9 [] [] }} ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.530 [INFO][7596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.551 [INFO][7650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" HandleID="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.551 [INFO][7650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" HandleID="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-a9b446c9a0", "pod":"csi-node-driver-hcb6n", "timestamp":"2025-05-17 01:48:40.551675688 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.551 [INFO][7650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.551 [INFO][7650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.551 [INFO][7650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0'
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.559 [INFO][7650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.562 [INFO][7650] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.565 [INFO][7650] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.566 [INFO][7650] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.568 [INFO][7650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.568 [INFO][7650] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.569 [INFO][7650] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.572 [INFO][7650] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.575 [INFO][7650] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.132/26] block=192.168.17.128/26 handle="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.575 [INFO][7650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.132/26] handle="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.576 [INFO][7650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 01:48:40.600005 containerd[2793]: 2025-05-17 01:48:40.576 [INFO][7650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.132/26] IPv6=[] ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" HandleID="k8s-pod-network.976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0"
May 17 01:48:40.600710 containerd[2793]: 2025-05-17 01:48:40.577 [INFO][7596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d016dc5a-5728-4bc3-95ba-213735e255c5", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"csi-node-driver-hcb6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87ed4325bb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 01:48:40.600710 containerd[2793]: 2025-05-17 01:48:40.577 [INFO][7596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.132/32] ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0"
May 17 01:48:40.600710 containerd[2793]: 2025-05-17 01:48:40.577 [INFO][7596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87ed4325bb9 ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0"
May 17 01:48:40.600710 containerd[2793]: 2025-05-17 01:48:40.580 [INFO][7596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0"
May 17 01:48:40.600710 containerd[2793]: 2025-05-17 01:48:40.580 [INFO][7596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d016dc5a-5728-4bc3-95ba-213735e255c5", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e", Pod:"csi-node-driver-hcb6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87ed4325bb9", MAC:"ca:22:ba:ce:20:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 01:48:40.600710 containerd[2793]: 2025-05-17 01:48:40.598 [INFO][7596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e" Namespace="calico-system" Pod="csi-node-driver-hcb6n" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0"
May 17 01:48:40.612073 containerd[2793]: time="2025-05-17T01:48:40.612006509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:48:40.612073 containerd[2793]: time="2025-05-17T01:48:40.612053189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:48:40.612073 containerd[2793]: time="2025-05-17T01:48:40.612067109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:48:40.612196 containerd[2793]: time="2025-05-17T01:48:40.612157429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:48:40.644892 containerd[2793]: time="2025-05-17T01:48:40.644865261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcb6n,Uid:d016dc5a-5728-4bc3-95ba-213735e255c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e\""
May 17 01:48:40.645125 systemd-networkd[2320]: cali6e27f828eb9: Gained IPv6LL
May 17 01:48:40.645869 containerd[2793]: time="2025-05-17T01:48:40.645843622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\""
May 17 01:48:40.681676 systemd-networkd[2320]: cali331fceb44b6: Link UP
May 17 01:48:40.681906 systemd-networkd[2320]: cali331fceb44b6: Gained carrier
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.538 [INFO][7612] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0 coredns-7c65d6cfc9- kube-system 17a15b4f-bd7a-48c1-8f89-54ee79c68bdb 960 0 2025-05-17 01:48:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-a9b446c9a0 coredns-7c65d6cfc9-lvq6d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali331fceb44b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.538 [INFO][7612] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.557 [INFO][7661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" HandleID="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.557 [INFO][7661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" HandleID="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400072abd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-a9b446c9a0", "pod":"coredns-7c65d6cfc9-lvq6d", "timestamp":"2025-05-17 01:48:40.557771974 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.557 [INFO][7661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.576 [INFO][7661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.576 [INFO][7661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0'
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.661 [INFO][7661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.664 [INFO][7661] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.667 [INFO][7661] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.668 [INFO][7661] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.670 [INFO][7661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.670 [INFO][7661] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.671 [INFO][7661] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.673 [INFO][7661] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.677 [INFO][7661] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.133/26] block=192.168.17.128/26 handle="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.677 [INFO][7661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.133/26] handle="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" host="ci-4081.3.3-n-a9b446c9a0"
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.677 [INFO][7661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 01:48:40.691148 containerd[2793]: 2025-05-17 01:48:40.677 [INFO][7661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.133/26] IPv6=[] ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" HandleID="k8s-pod-network.e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.691609 containerd[2793]: 2025-05-17 01:48:40.679 [INFO][7612] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"coredns-7c65d6cfc9-lvq6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali331fceb44b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 01:48:40.691609 containerd[2793]: 2025-05-17 01:48:40.679 [INFO][7612] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.133/32] ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.691609 containerd[2793]: 2025-05-17 01:48:40.679 [INFO][7612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali331fceb44b6 ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.691609 containerd[2793]: 2025-05-17 01:48:40.682 [INFO][7612] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.691609 containerd[2793]: 2025-05-17 01:48:40.682 [INFO][7612] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50", Pod:"coredns-7c65d6cfc9-lvq6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali331fceb44b6", MAC:"4e:76:9c:cd:8a:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 01:48:40.691609 containerd[2793]: 2025-05-17 01:48:40.688 [INFO][7612] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvq6d" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0"
May 17 01:48:40.703311 containerd[2793]: time="2025-05-17T01:48:40.703252880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:48:40.703311 containerd[2793]: time="2025-05-17T01:48:40.703306880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:48:40.703360 containerd[2793]: time="2025-05-17T01:48:40.703317800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:48:40.703423 containerd[2793]: time="2025-05-17T01:48:40.703404920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:48:40.753420 containerd[2793]: time="2025-05-17T01:48:40.753389450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvq6d,Uid:17a15b4f-bd7a-48c1-8f89-54ee79c68bdb,Namespace:kube-system,Attempt:1,} returns sandbox id \"e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50\""
May 17 01:48:40.755220 containerd[2793]: time="2025-05-17T01:48:40.755198652Z" level=info msg="CreateContainer within sandbox \"e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 01:48:40.760265 containerd[2793]: time="2025-05-17T01:48:40.760237137Z" level=info msg="CreateContainer within sandbox \"e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe5baec1a7581ee37be9bee0a61ae0f9f7ed5bed67d2323e70a64da7a87d1a56\""
May 17 01:48:40.760574 containerd[2793]: time="2025-05-17T01:48:40.760545417Z" level=info msg="StartContainer for \"fe5baec1a7581ee37be9bee0a61ae0f9f7ed5bed67d2323e70a64da7a87d1a56\""
May 17 01:48:40.806033 containerd[2793]: time="2025-05-17T01:48:40.806007303Z" level=info msg="StartContainer for \"fe5baec1a7581ee37be9bee0a61ae0f9f7ed5bed67d2323e70a64da7a87d1a56\" returns successfully"
May 17 01:48:40.946715 containerd[2793]: time="2025-05-17T01:48:40.946619844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:48:40.946715 containerd[2793]: time="2025-05-17T01:48:40.946689724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240"
May 17 01:48:40.947317 containerd[2793]: time="2025-05-17T01:48:40.947299244Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:48:40.949140 containerd[2793]: time="2025-05-17T01:48:40.949108926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:48:40.949914 containerd[2793]: time="2025-05-17T01:48:40.949887447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 304.012225ms"
May 17 01:48:40.949939 containerd[2793]: time="2025-05-17T01:48:40.949918447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\""
May 17 01:48:40.951513 containerd[2793]: time="2025-05-17T01:48:40.951487928Z" level=info msg="CreateContainer within sandbox \"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 17 01:48:40.966056 containerd[2793]: time="2025-05-17T01:48:40.966030223Z" level=info msg="CreateContainer within sandbox \"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"342bb3b3654d40dfa59d28c388f1ff133b772ae035d93b2b4b883e6597a59651\""
May 17 01:48:40.966412 containerd[2793]: time="2025-05-17T01:48:40.966388863Z" level=info msg="StartContainer for \"342bb3b3654d40dfa59d28c388f1ff133b772ae035d93b2b4b883e6597a59651\""
May 17 01:48:41.008449 containerd[2793]: time="2025-05-17T01:48:41.008421545Z" level=info msg="StartContainer for \"342bb3b3654d40dfa59d28c388f1ff133b772ae035d93b2b4b883e6597a59651\" returns successfully"
May 17 01:48:41.009210 containerd[2793]: time="2025-05-17T01:48:41.009191866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 17 01:48:41.382593 containerd[2793]: time="2025-05-17T01:48:41.382553696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:48:41.382713 containerd[2793]: time="2025-05-17T01:48:41.382592576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925"
May 17 01:48:41.383312 containerd[2793]: time="2025-05-17T01:48:41.383286977Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:48:41.385116 containerd[2793]: time="2025-05-17T01:48:41.385088539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:48:41.385902 containerd[2793]: time="2025-05-17T01:48:41.385877339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 376.659673ms"
May 17 01:48:41.385926 containerd[2793]: time="2025-05-17T01:48:41.385909219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\""
May 17 01:48:41.387588 containerd[2793]: time="2025-05-17T01:48:41.387567061Z" level=info msg="CreateContainer within sandbox \"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 17 01:48:41.395473 containerd[2793]: time="2025-05-17T01:48:41.395449228Z" level=info msg="CreateContainer within sandbox \"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d2b1c64151ea1d8c6ae8abacdea6f6ecad544c8bbb785d03da34ad79529b3dea\""
May 17 01:48:41.395748 containerd[2793]: time="2025-05-17T01:48:41.395724349Z" level=info msg="StartContainer for \"d2b1c64151ea1d8c6ae8abacdea6f6ecad544c8bbb785d03da34ad79529b3dea\""
May 17 01:48:41.430547 containerd[2793]: time="2025-05-17T01:48:41.430507901Z" level=info msg="StopPodSandbox for \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\""
May 17 01:48:41.430604 containerd[2793]: time="2025-05-17T01:48:41.430573661Z" level=info msg="StopPodSandbox for \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\""
May 17 01:48:41.430638 containerd[2793]: time="2025-05-17T01:48:41.430615941Z" level=info msg="StopPodSandbox for \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\""
May 17 01:48:41.440660 containerd[2793]: time="2025-05-17T01:48:41.440632951Z" level=info msg="StartContainer for \"d2b1c64151ea1d8c6ae8abacdea6f6ecad544c8bbb785d03da34ad79529b3dea\" returns successfully"
May 17 01:48:41.485483 kubelet[4322]: I0517 01:48:41.485453 4322 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 17 01:48:41.485483 kubelet[4322]: I0517 01:48:41.485484 4322 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 17 01:48:41.495426 
containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" iface="eth0" netns="/var/run/netns/cni-6bbc236b-aa6a-4a5f-212d-40ee876d3387" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" iface="eth0" netns="/var/run/netns/cni-6bbc236b-aa6a-4a5f-212d-40ee876d3387" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" iface="eth0" netns="/var/run/netns/cni-6bbc236b-aa6a-4a5f-212d-40ee876d3387" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.483 [INFO][8042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.483 [INFO][8042] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.483 [INFO][8042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.491 [WARNING][8042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.491 [INFO][8042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.492 [INFO][8042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:41.495426 containerd[2793]: 2025-05-17 01:48:41.494 [INFO][7967] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:48:41.495712 containerd[2793]: time="2025-05-17T01:48:41.495647042Z" level=info msg="TearDown network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\" successfully" May 17 01:48:41.495712 containerd[2793]: time="2025-05-17T01:48:41.495674282Z" level=info msg="StopPodSandbox for \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\" returns successfully" May 17 01:48:41.496240 containerd[2793]: time="2025-05-17T01:48:41.496211643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc858bc7-skfq5,Uid:3b2192c3-fd97-4d0f-b686-fbf27564a7ef,Namespace:calico-system,Attempt:1,}" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7968] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" iface="eth0" netns="/var/run/netns/cni-7c75b8a0-0335-cbc6-450e-a72d1ad2e440" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7968] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" iface="eth0" netns="/var/run/netns/cni-7c75b8a0-0335-cbc6-450e-a72d1ad2e440" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7968] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" iface="eth0" netns="/var/run/netns/cni-7c75b8a0-0335-cbc6-450e-a72d1ad2e440" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.484 [INFO][8041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.484 [INFO][8041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.492 [INFO][8041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.499 [WARNING][8041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.499 [INFO][8041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.500 [INFO][8041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:41.503869 containerd[2793]: 2025-05-17 01:48:41.502 [INFO][7968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:48:41.504157 containerd[2793]: time="2025-05-17T01:48:41.504021450Z" level=info msg="TearDown network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\" successfully" May 17 01:48:41.504157 containerd[2793]: time="2025-05-17T01:48:41.504043970Z" level=info msg="StopPodSandbox for \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\" returns successfully" May 17 01:48:41.504488 containerd[2793]: time="2025-05-17T01:48:41.504461771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-fsg7s,Uid:a5e46d9f-129d-4d2e-be7f-85655ce91f55,Namespace:calico-apiserver,Attempt:1,}" May 17 01:48:41.505008 systemd[1]: run-netns-cni\x2d6bbc236b\x2daa6a\x2d4a5f\x2d212d\x2d40ee876d3387.mount: Deactivated successfully. May 17 01:48:41.508335 systemd[1]: run-netns-cni\x2d7c75b8a0\x2d0335\x2dcbc6\x2d450e\x2da72d1ad2e440.mount: Deactivated successfully. 
May 17 01:48:41.510298 kubelet[4322]: I0517 01:48:41.510280 4322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" iface="eth0" netns="/var/run/netns/cni-db273a11-fb45-8640-e079-14adf7e43c7e" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" iface="eth0" netns="/var/run/netns/cni-db273a11-fb45-8640-e079-14adf7e43c7e" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" iface="eth0" netns="/var/run/netns/cni-db273a11-fb45-8640-e079-14adf7e43c7e" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.466 [INFO][7966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.484 [INFO][8044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.484 [INFO][8044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.500 [INFO][8044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.507 [WARNING][8044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.507 [INFO][8044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.508 [INFO][8044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:48:41.512358 containerd[2793]: 2025-05-17 01:48:41.510 [INFO][7966] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:48:41.512626 containerd[2793]: time="2025-05-17T01:48:41.512537418Z" level=info msg="TearDown network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\" successfully" May 17 01:48:41.512626 containerd[2793]: time="2025-05-17T01:48:41.512559458Z" level=info msg="StopPodSandbox for \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\" returns successfully" May 17 01:48:41.513773 containerd[2793]: time="2025-05-17T01:48:41.513730539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-dqq55,Uid:167f036e-0c64-4fc2-a584-1f781d3f336f,Namespace:calico-system,Attempt:1,}" May 17 01:48:41.514494 systemd[1]: run-netns-cni\x2ddb273a11\x2dfb45\x2d8640\x2de079\x2d14adf7e43c7e.mount: Deactivated successfully. 
May 17 01:48:41.517278 kubelet[4322]: I0517 01:48:41.517234 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lvq6d" podStartSLOduration=31.517216943 podStartE2EDuration="31.517216943s" podCreationTimestamp="2025-05-17 01:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:48:41.516863662 +0000 UTC m=+39.155305718" watchObservedRunningTime="2025-05-17 01:48:41.517216943 +0000 UTC m=+39.155658959" May 17 01:48:41.525119 kubelet[4322]: I0517 01:48:41.524884 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hcb6n" podStartSLOduration=17.784065832 podStartE2EDuration="18.52486935s" podCreationTimestamp="2025-05-17 01:48:23 +0000 UTC" firstStartedPulling="2025-05-17 01:48:40.645679502 +0000 UTC m=+38.284121558" lastFinishedPulling="2025-05-17 01:48:41.38648302 +0000 UTC m=+39.024925076" observedRunningTime="2025-05-17 01:48:41.52456839 +0000 UTC m=+39.163010446" watchObservedRunningTime="2025-05-17 01:48:41.52486935 +0000 UTC m=+39.163311406" May 17 01:48:41.577331 systemd-networkd[2320]: cali649aba103d0: Link UP May 17 01:48:41.577598 systemd-networkd[2320]: cali649aba103d0: Gained carrier May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.528 [INFO][8103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0 calico-kube-controllers-78fc858bc7- calico-system 3b2192c3-fd97-4d0f-b686-fbf27564a7ef 988 0 2025-05-17 01:48:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78fc858bc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 
ci-4081.3.3-n-a9b446c9a0 calico-kube-controllers-78fc858bc7-skfq5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali649aba103d0 [] [] }} ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" Pod="calico-kube-controllers-78fc858bc7-skfq5" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.528 [INFO][8103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" Pod="calico-kube-controllers-78fc858bc7-skfq5" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.549 [INFO][8172] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" HandleID="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.550 [INFO][8172] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" HandleID="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dcd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-a9b446c9a0", "pod":"calico-kube-controllers-78fc858bc7-skfq5", "timestamp":"2025-05-17 01:48:41.549928413 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.550 [INFO][8172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.550 [INFO][8172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.550 [INFO][8172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0' May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.557 [INFO][8172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.560 [INFO][8172] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.563 [INFO][8172] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.564 [INFO][8172] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.566 [INFO][8172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.566 [INFO][8172] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.567 [INFO][8172] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2 May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.569 [INFO][8172] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.573 [INFO][8172] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.134/26] block=192.168.17.128/26 handle="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.573 [INFO][8172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.134/26] handle="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.573 [INFO][8172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 01:48:41.585288 containerd[2793]: 2025-05-17 01:48:41.573 [INFO][8172] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.134/26] IPv6=[] ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" HandleID="k8s-pod-network.e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.585744 containerd[2793]: 2025-05-17 01:48:41.575 [INFO][8103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" Pod="calico-kube-controllers-78fc858bc7-skfq5" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0", GenerateName:"calico-kube-controllers-78fc858bc7-", Namespace:"calico-system", SelfLink:"", UID:"3b2192c3-fd97-4d0f-b686-fbf27564a7ef", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fc858bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"calico-kube-controllers-78fc858bc7-skfq5", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali649aba103d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:41.585744 containerd[2793]: 2025-05-17 01:48:41.575 [INFO][8103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.134/32] ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" Pod="calico-kube-controllers-78fc858bc7-skfq5" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.585744 containerd[2793]: 2025-05-17 01:48:41.575 [INFO][8103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali649aba103d0 ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" Pod="calico-kube-controllers-78fc858bc7-skfq5" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.585744 containerd[2793]: 2025-05-17 01:48:41.577 [INFO][8103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" Pod="calico-kube-controllers-78fc858bc7-skfq5" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.585744 containerd[2793]: 2025-05-17 01:48:41.577 [INFO][8103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" Pod="calico-kube-controllers-78fc858bc7-skfq5" 
WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0", GenerateName:"calico-kube-controllers-78fc858bc7-", Namespace:"calico-system", SelfLink:"", UID:"3b2192c3-fd97-4d0f-b686-fbf27564a7ef", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fc858bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2", Pod:"calico-kube-controllers-78fc858bc7-skfq5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali649aba103d0", MAC:"1a:e3:53:46:5b:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:41.585744 containerd[2793]: 2025-05-17 01:48:41.583 [INFO][8103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2" Namespace="calico-system" 
Pod="calico-kube-controllers-78fc858bc7-skfq5" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:48:41.598327 containerd[2793]: time="2025-05-17T01:48:41.598253459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:41.598327 containerd[2793]: time="2025-05-17T01:48:41.598311099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:41.598377 containerd[2793]: time="2025-05-17T01:48:41.598322419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:41.598431 containerd[2793]: time="2025-05-17T01:48:41.598412499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:41.643223 containerd[2793]: time="2025-05-17T01:48:41.643151101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc858bc7-skfq5,Uid:3b2192c3-fd97-4d0f-b686-fbf27564a7ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2\"" May 17 01:48:41.644241 containerd[2793]: time="2025-05-17T01:48:41.644221502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 01:48:41.677863 systemd-networkd[2320]: calif7493be704c: Link UP May 17 01:48:41.678364 systemd-networkd[2320]: calif7493be704c: Gained carrier May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.548 [INFO][8131] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0 calico-apiserver-95d6b45b8- calico-apiserver a5e46d9f-129d-4d2e-be7f-85655ce91f55 987 0 2025-05-17 01:48:19 
+0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:95d6b45b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-a9b446c9a0 calico-apiserver-95d6b45b8-fsg7s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7493be704c [] [] }} ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.548 [INFO][8131] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.569 [INFO][8204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" HandleID="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.569 [INFO][8204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" HandleID="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d830), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4081.3.3-n-a9b446c9a0", "pod":"calico-apiserver-95d6b45b8-fsg7s", "timestamp":"2025-05-17 01:48:41.569132311 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.569 [INFO][8204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.573 [INFO][8204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.573 [INFO][8204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0' May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.658 [INFO][8204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.661 [INFO][8204] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.664 [INFO][8204] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.666 [INFO][8204] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.667 [INFO][8204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.667 [INFO][8204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 
handle="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.668 [INFO][8204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9 May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.671 [INFO][8204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.674 [INFO][8204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.135/26] block=192.168.17.128/26 handle="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.675 [INFO][8204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.135/26] handle="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.675 [INFO][8204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 01:48:41.686537 containerd[2793]: 2025-05-17 01:48:41.675 [INFO][8204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.135/26] IPv6=[] ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" HandleID="k8s-pod-network.5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.686950 containerd[2793]: 2025-05-17 01:48:41.676 [INFO][8131] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5e46d9f-129d-4d2e-be7f-85655ce91f55", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"calico-apiserver-95d6b45b8-fsg7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7493be704c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:41.686950 containerd[2793]: 2025-05-17 01:48:41.676 [INFO][8131] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.135/32] ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.686950 containerd[2793]: 2025-05-17 01:48:41.676 [INFO][8131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7493be704c ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.686950 containerd[2793]: 2025-05-17 01:48:41.679 [INFO][8131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.686950 containerd[2793]: 2025-05-17 01:48:41.679 [INFO][8131] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5e46d9f-129d-4d2e-be7f-85655ce91f55", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9", Pod:"calico-apiserver-95d6b45b8-fsg7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7493be704c", MAC:"be:5e:8b:fe:e8:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:41.686950 containerd[2793]: 2025-05-17 01:48:41.685 [INFO][8131] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9" Namespace="calico-apiserver" Pod="calico-apiserver-95d6b45b8-fsg7s" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:48:41.699093 containerd[2793]: time="2025-05-17T01:48:41.698754153Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:41.699124 containerd[2793]: time="2025-05-17T01:48:41.699089113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:41.699124 containerd[2793]: time="2025-05-17T01:48:41.699103753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:41.699210 containerd[2793]: time="2025-05-17T01:48:41.699195273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:41.748297 containerd[2793]: time="2025-05-17T01:48:41.748267480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d6b45b8-fsg7s,Uid:a5e46d9f-129d-4d2e-be7f-85655ce91f55,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9\"" May 17 01:48:41.750121 containerd[2793]: time="2025-05-17T01:48:41.750098641Z" level=info msg="CreateContainer within sandbox \"5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 01:48:41.753369 containerd[2793]: time="2025-05-17T01:48:41.753345564Z" level=info msg="CreateContainer within sandbox \"5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b94c73c084900b1a9e2c00fe69c906a3ab8eaee7cbaa9071c3593290e2d25386\"" May 17 01:48:41.753679 containerd[2793]: time="2025-05-17T01:48:41.753655365Z" level=info msg="StartContainer for \"b94c73c084900b1a9e2c00fe69c906a3ab8eaee7cbaa9071c3593290e2d25386\"" May 17 01:48:41.779362 systemd-networkd[2320]: cali265a3674f6b: Link UP May 17 01:48:41.779635 systemd-networkd[2320]: 
cali265a3674f6b: Gained carrier May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.552 [INFO][8133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0 goldmane-8f77d7b6c- calico-system 167f036e-0c64-4fc2-a584-1f781d3f336f 989 0 2025-05-17 01:48:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-a9b446c9a0 goldmane-8f77d7b6c-dqq55 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali265a3674f6b [] [] }} ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.552 [INFO][8133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.572 [INFO][8210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" HandleID="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.572 [INFO][8210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" 
HandleID="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40008b1d80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-a9b446c9a0", "pod":"goldmane-8f77d7b6c-dqq55", "timestamp":"2025-05-17 01:48:41.572548155 +0000 UTC"}, Hostname:"ci-4081.3.3-n-a9b446c9a0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.572 [INFO][8210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.675 [INFO][8210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.675 [INFO][8210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-a9b446c9a0' May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.759 [INFO][8210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.762 [INFO][8210] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.765 [INFO][8210] ipam/ipam.go 511: Trying affinity for 192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.766 [INFO][8210] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.768 [INFO][8210] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.17.128/26 host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.768 [INFO][8210] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.17.128/26 handle="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.769 [INFO][8210] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7 May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.771 [INFO][8210] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.17.128/26 handle="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.776 [INFO][8210] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.17.136/26] block=192.168.17.128/26 handle="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.776 [INFO][8210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.136/26] handle="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" host="ci-4081.3.3-n-a9b446c9a0" May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.776 [INFO][8210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 01:48:41.788346 containerd[2793]: 2025-05-17 01:48:41.776 [INFO][8210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.136/26] IPv6=[] ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" HandleID="k8s-pod-network.804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.788800 containerd[2793]: 2025-05-17 01:48:41.777 [INFO][8133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"167f036e-0c64-4fc2-a584-1f781d3f336f", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"", Pod:"goldmane-8f77d7b6c-dqq55", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali265a3674f6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:41.788800 containerd[2793]: 2025-05-17 01:48:41.778 [INFO][8133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.136/32] ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.788800 containerd[2793]: 2025-05-17 01:48:41.778 [INFO][8133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali265a3674f6b ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.788800 containerd[2793]: 2025-05-17 01:48:41.779 [INFO][8133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.788800 containerd[2793]: 2025-05-17 01:48:41.780 [INFO][8133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", 
UID:"167f036e-0c64-4fc2-a584-1f781d3f336f", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7", Pod:"goldmane-8f77d7b6c-dqq55", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali265a3674f6b", MAC:"d6:4e:84:2a:e7:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:48:41.788800 containerd[2793]: 2025-05-17 01:48:41.786 [INFO][8133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-dqq55" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:48:41.800957 containerd[2793]: time="2025-05-17T01:48:41.800764249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:48:41.800957 containerd[2793]: time="2025-05-17T01:48:41.800813369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:48:41.800957 containerd[2793]: time="2025-05-17T01:48:41.800824409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:41.800957 containerd[2793]: time="2025-05-17T01:48:41.800906129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:48:41.806827 containerd[2793]: time="2025-05-17T01:48:41.806796934Z" level=info msg="StartContainer for \"b94c73c084900b1a9e2c00fe69c906a3ab8eaee7cbaa9071c3593290e2d25386\" returns successfully" May 17 01:48:41.829814 containerd[2793]: time="2025-05-17T01:48:41.829785796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-dqq55,Uid:167f036e-0c64-4fc2-a584-1f781d3f336f,Namespace:calico-system,Attempt:1,} returns sandbox id \"804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7\"" May 17 01:48:41.862171 systemd-networkd[2320]: cali331fceb44b6: Gained IPv6LL May 17 01:48:42.053158 systemd-networkd[2320]: cali87ed4325bb9: Gained IPv6LL May 17 01:48:42.522946 kubelet[4322]: I0517 01:48:42.522745 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-95d6b45b8-fsg7s" podStartSLOduration=23.522730136 podStartE2EDuration="23.522730136s" podCreationTimestamp="2025-05-17 01:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:48:42.522370976 +0000 UTC m=+40.160813072" watchObservedRunningTime="2025-05-17 01:48:42.522730136 +0000 UTC m=+40.161172192" May 17 01:48:42.628190 containerd[2793]: time="2025-05-17T01:48:42.628148269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:42.628312 
containerd[2793]: time="2025-05-17T01:48:42.628252829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 17 01:48:42.628934 containerd[2793]: time="2025-05-17T01:48:42.628915349Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:42.630626 containerd[2793]: time="2025-05-17T01:48:42.630604911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:48:42.631326 containerd[2793]: time="2025-05-17T01:48:42.631303191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 987.051009ms" May 17 01:48:42.631352 containerd[2793]: time="2025-05-17T01:48:42.631334272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 01:48:42.632014 containerd[2793]: time="2025-05-17T01:48:42.631996232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:48:42.636554 containerd[2793]: time="2025-05-17T01:48:42.636533796Z" level=info msg="CreateContainer within sandbox \"e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 01:48:42.639967 containerd[2793]: time="2025-05-17T01:48:42.639939199Z" level=info msg="CreateContainer 
within sandbox \"e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"abac2868a6b67ab62dd2192a99c09aa9ec1562e54252e3c9382dab18f2bac267\"" May 17 01:48:42.640282 containerd[2793]: time="2025-05-17T01:48:42.640261199Z" level=info msg="StartContainer for \"abac2868a6b67ab62dd2192a99c09aa9ec1562e54252e3c9382dab18f2bac267\"" May 17 01:48:42.653765 containerd[2793]: time="2025-05-17T01:48:42.653733091Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:42.667202 containerd[2793]: time="2025-05-17T01:48:42.653947491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:42.667202 containerd[2793]: time="2025-05-17T01:48:42.654012331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:48:42.667249 kubelet[4322]: E0517 01:48:42.654082 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:48:42.667249 kubelet[4322]: E0517 01:48:42.654118 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:48:42.667249 kubelet[4322]: E0517 01:48:42.654230 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4m2c,ReadOnly:t
rue,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:42.667249 kubelet[4322]: E0517 01:48:42.655359 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:48:42.691890 containerd[2793]: time="2025-05-17T01:48:42.691861245Z" level=info msg="StartContainer for \"abac2868a6b67ab62dd2192a99c09aa9ec1562e54252e3c9382dab18f2bac267\" returns successfully" May 17 01:48:42.821208 systemd-networkd[2320]: cali649aba103d0: Gained IPv6LL May 17 01:48:43.517978 kubelet[4322]: I0517 01:48:43.517955 4322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:48:43.518635 kubelet[4322]: E0517 01:48:43.518610 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:48:43.534443 kubelet[4322]: I0517 01:48:43.534403 4322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78fc858bc7-skfq5" podStartSLOduration=19.546547947 podStartE2EDuration="20.534389997s" podCreationTimestamp="2025-05-17 01:48:23 +0000 UTC" firstStartedPulling="2025-05-17 01:48:41.644047262 +0000 UTC m=+39.282489278" lastFinishedPulling="2025-05-17 01:48:42.631889272 +0000 UTC m=+40.270331328" observedRunningTime="2025-05-17 01:48:43.534317397 +0000 UTC m=+41.172759413" watchObservedRunningTime="2025-05-17 01:48:43.534389997 +0000 UTC m=+41.172832053" May 17 01:48:43.653152 systemd-networkd[2320]: calif7493be704c: Gained IPv6LL May 17 01:48:43.717133 systemd-networkd[2320]: cali265a3674f6b: Gained IPv6LL May 17 
01:48:48.431420 containerd[2793]: time="2025-05-17T01:48:48.431370175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:48:48.456058 containerd[2793]: time="2025-05-17T01:48:48.456013270Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:48.456292 containerd[2793]: time="2025-05-17T01:48:48.456262350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:48.456356 containerd[2793]: time="2025-05-17T01:48:48.456331750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:48:48.456442 kubelet[4322]: E0517 01:48:48.456399 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:48:48.456673 kubelet[4322]: E0517 01:48:48.456452 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:48:48.456673 kubelet[4322]: E0517 01:48:48.456552 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c14844c5913491a84860e1a0b8551a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:48.458157 containerd[2793]: time="2025-05-17T01:48:48.458125391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:48:48.484413 containerd[2793]: time="2025-05-17T01:48:48.484368167Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:48.484622 containerd[2793]: time="2025-05-17T01:48:48.484592727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:48.484690 containerd[2793]: time="2025-05-17T01:48:48.484655367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:48:48.484771 kubelet[4322]: E0517 01:48:48.484738 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:48:48.484813 kubelet[4322]: E0517 01:48:48.484777 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:48:48.484892 kubelet[4322]: E0517 01:48:48.484858 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:48.486014 kubelet[4322]: E0517 01:48:48.485985 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:48:56.431577 containerd[2793]: time="2025-05-17T01:48:56.431528127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:48:56.459859 containerd[2793]: time="2025-05-17T01:48:56.459754297Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:56.469198 containerd[2793]: time="2025-05-17T01:48:56.469160300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:56.469257 containerd[2793]: time="2025-05-17T01:48:56.469230220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:48:56.469402 kubelet[4322]: E0517 01:48:56.469352 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 
01:48:56.469670 kubelet[4322]: E0517 01:48:56.469419 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:48:56.469670 kubelet[4322]: E0517 01:48:56.469529 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4m2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceacc
ount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:56.470663 kubelet[4322]: E0517 01:48:56.470624 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:48:59.396123 kubelet[4322]: I0517 01:48:59.396042 4322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:49:00.431911 kubelet[4322]: E0517 01:49:00.431873 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:49:02.426579 containerd[2793]: time="2025-05-17T01:49:02.426548748Z" level=info msg="StopPodSandbox for \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\"" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.456 [WARNING][8653] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d016dc5a-5728-4bc3-95ba-213735e255c5", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e", Pod:"csi-node-driver-hcb6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87ed4325bb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.456 [INFO][8653] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.456 [INFO][8653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" iface="eth0" netns="" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.456 [INFO][8653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.456 [INFO][8653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.473 [INFO][8674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.473 [INFO][8674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.473 [INFO][8674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.482 [WARNING][8674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.482 [INFO][8674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.485 [INFO][8674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.488419 containerd[2793]: 2025-05-17 01:49:02.487 [INFO][8653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.488801 containerd[2793]: time="2025-05-17T01:49:02.488454683Z" level=info msg="TearDown network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\" successfully" May 17 01:49:02.488801 containerd[2793]: time="2025-05-17T01:49:02.488478083Z" level=info msg="StopPodSandbox for \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\" returns successfully" May 17 01:49:02.488894 containerd[2793]: time="2025-05-17T01:49:02.488871763Z" level=info msg="RemovePodSandbox for \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\"" May 17 01:49:02.488918 containerd[2793]: time="2025-05-17T01:49:02.488902683Z" level=info msg="Forcibly stopping sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\"" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.518 [WARNING][8704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d016dc5a-5728-4bc3-95ba-213735e255c5", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"976bb845d9bbc078d532e6976cc8f2583766857890ee1320f73e8fd693ca635e", Pod:"csi-node-driver-hcb6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87ed4325bb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.518 [INFO][8704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.518 [INFO][8704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" iface="eth0" netns="" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.518 [INFO][8704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.518 [INFO][8704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.535 [INFO][8726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.535 [INFO][8726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.535 [INFO][8726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.544 [WARNING][8726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.544 [INFO][8726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" HandleID="k8s-pod-network.30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-csi--node--driver--hcb6n-eth0" May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.547 [INFO][8726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.549947 containerd[2793]: 2025-05-17 01:49:02.548 [INFO][8704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973" May 17 01:49:02.550387 containerd[2793]: time="2025-05-17T01:49:02.549971858Z" level=info msg="TearDown network for sandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\" successfully" May 17 01:49:02.551740 containerd[2793]: time="2025-05-17T01:49:02.551713538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:02.551779 containerd[2793]: time="2025-05-17T01:49:02.551766258Z" level=info msg="RemovePodSandbox \"30860280fe8a4f161c8d7ed8bddbb4ccab85b524295b1176286b1313d205d973\" returns successfully" May 17 01:49:02.552060 containerd[2793]: time="2025-05-17T01:49:02.552035498Z" level=info msg="StopPodSandbox for \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\"" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.583 [WARNING][8756] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e771814-a5f9-4a19-8f90-467aed0cea74", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66", Pod:"coredns-7c65d6cfc9-9kg2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e27f828eb9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.583 [INFO][8756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.583 [INFO][8756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" iface="eth0" netns="" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.583 [INFO][8756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.583 [INFO][8756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.601 [INFO][8777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.601 [INFO][8777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.601 [INFO][8777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.608 [WARNING][8777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.608 [INFO][8777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.609 [INFO][8777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.612311 containerd[2793]: 2025-05-17 01:49:02.610 [INFO][8756] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.612590 containerd[2793]: time="2025-05-17T01:49:02.612345113Z" level=info msg="TearDown network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\" successfully" May 17 01:49:02.612590 containerd[2793]: time="2025-05-17T01:49:02.612365873Z" level=info msg="StopPodSandbox for \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\" returns successfully" May 17 01:49:02.612673 containerd[2793]: time="2025-05-17T01:49:02.612652553Z" level=info msg="RemovePodSandbox for \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\"" May 17 01:49:02.612699 containerd[2793]: time="2025-05-17T01:49:02.612677793Z" level=info msg="Forcibly stopping sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\"" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.643 [WARNING][8807] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2e771814-a5f9-4a19-8f90-467aed0cea74", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"02a5da00f6abb41c8636c50c180713ca2cd08f8ba011ce73b9e7b6f3b7748b66", Pod:"coredns-7c65d6cfc9-9kg2c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e27f828eb9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.672684 containerd[2793]: 2025-05-17 
01:49:02.643 [INFO][8807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.643 [INFO][8807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" iface="eth0" netns="" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.643 [INFO][8807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.643 [INFO][8807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.661 [INFO][8829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.661 [INFO][8829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.661 [INFO][8829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.668 [WARNING][8829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.668 [INFO][8829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" HandleID="k8s-pod-network.be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--9kg2c-eth0" May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.669 [INFO][8829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.672684 containerd[2793]: 2025-05-17 01:49:02.671 [INFO][8807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8" May 17 01:49:02.673038 containerd[2793]: time="2025-05-17T01:49:02.672720607Z" level=info msg="TearDown network for sandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\" successfully" May 17 01:49:02.674258 containerd[2793]: time="2025-05-17T01:49:02.674229248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:02.674292 containerd[2793]: time="2025-05-17T01:49:02.674279328Z" level=info msg="RemovePodSandbox \"be292a9c0f75b518bd67f8201c82943a508d6a33b7c93a2ec4e4db54ee9dd1b8\" returns successfully" May 17 01:49:02.674729 containerd[2793]: time="2025-05-17T01:49:02.674705728Z" level=info msg="StopPodSandbox for \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\"" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.704 [WARNING][8860] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0", GenerateName:"calico-kube-controllers-78fc858bc7-", Namespace:"calico-system", SelfLink:"", UID:"3b2192c3-fd97-4d0f-b686-fbf27564a7ef", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fc858bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2", Pod:"calico-kube-controllers-78fc858bc7-skfq5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali649aba103d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.704 [INFO][8860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.704 [INFO][8860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" iface="eth0" netns="" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.704 [INFO][8860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.704 [INFO][8860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.722 [INFO][8881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.722 [INFO][8881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.722 [INFO][8881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.729 [WARNING][8881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.729 [INFO][8881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.730 [INFO][8881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.733168 containerd[2793]: 2025-05-17 01:49:02.731 [INFO][8860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.733168 containerd[2793]: time="2025-05-17T01:49:02.733150622Z" level=info msg="TearDown network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\" successfully" May 17 01:49:02.733602 containerd[2793]: time="2025-05-17T01:49:02.733172222Z" level=info msg="StopPodSandbox for \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\" returns successfully" May 17 01:49:02.733602 containerd[2793]: time="2025-05-17T01:49:02.733510182Z" level=info msg="RemovePodSandbox for \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\"" May 17 01:49:02.733602 containerd[2793]: time="2025-05-17T01:49:02.733542742Z" level=info msg="Forcibly stopping sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\"" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.763 [WARNING][8910] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0", GenerateName:"calico-kube-controllers-78fc858bc7-", Namespace:"calico-system", SelfLink:"", UID:"3b2192c3-fd97-4d0f-b686-fbf27564a7ef", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fc858bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"e7582b43c9e7ea9358de0f96e08b3f4affcbb6c66b166b364ca70e1df431ceb2", Pod:"calico-kube-controllers-78fc858bc7-skfq5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali649aba103d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.763 [INFO][8910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.763 [INFO][8910] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" iface="eth0" netns="" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.763 [INFO][8910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.764 [INFO][8910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.781 [INFO][8931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.781 [INFO][8931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.781 [INFO][8931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.788 [WARNING][8931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.788 [INFO][8931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" HandleID="k8s-pod-network.9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--kube--controllers--78fc858bc7--skfq5-eth0" May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.789 [INFO][8931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.792275 containerd[2793]: 2025-05-17 01:49:02.790 [INFO][8910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9" May 17 01:49:02.792555 containerd[2793]: time="2025-05-17T01:49:02.792312276Z" level=info msg="TearDown network for sandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\" successfully" May 17 01:49:02.794448 containerd[2793]: time="2025-05-17T01:49:02.794404397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:02.794584 containerd[2793]: time="2025-05-17T01:49:02.794542877Z" level=info msg="RemovePodSandbox \"9f3241b2ae8807ade4e2816c0f0b374f35469e9988231b40a5a2a0c0ecd8eaf9\" returns successfully" May 17 01:49:02.795832 containerd[2793]: time="2025-05-17T01:49:02.795802077Z" level=info msg="StopPodSandbox for \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\"" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.825 [WARNING][8963] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5e46d9f-129d-4d2e-be7f-85655ce91f55", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9", Pod:"calico-apiserver-95d6b45b8-fsg7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7493be704c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.825 [INFO][8963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.826 [INFO][8963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" iface="eth0" netns="" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.826 [INFO][8963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.826 [INFO][8963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.843 [INFO][8986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.843 [INFO][8986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.843 [INFO][8986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.851 [WARNING][8986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.851 [INFO][8986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.852 [INFO][8986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.854851 containerd[2793]: 2025-05-17 01:49:02.853 [INFO][8963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.855286 containerd[2793]: time="2025-05-17T01:49:02.854892492Z" level=info msg="TearDown network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\" successfully" May 17 01:49:02.855286 containerd[2793]: time="2025-05-17T01:49:02.854917692Z" level=info msg="StopPodSandbox for \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\" returns successfully" May 17 01:49:02.855333 containerd[2793]: time="2025-05-17T01:49:02.855289292Z" level=info msg="RemovePodSandbox for \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\"" May 17 01:49:02.855333 containerd[2793]: time="2025-05-17T01:49:02.855322492Z" level=info msg="Forcibly stopping sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\"" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.886 [WARNING][9016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5e46d9f-129d-4d2e-be7f-85655ce91f55", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"5a01f21995261e9bdc15d29adcd5b3e8cba44ef05199ca225f42cbbd29ff49f9", Pod:"calico-apiserver-95d6b45b8-fsg7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7493be704c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.886 [INFO][9016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.886 [INFO][9016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" iface="eth0" netns="" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.886 [INFO][9016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.886 [INFO][9016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.904 [INFO][9039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.904 [INFO][9039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.904 [INFO][9039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.911 [WARNING][9039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.911 [INFO][9039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" HandleID="k8s-pod-network.e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--fsg7s-eth0" May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.912 [INFO][9039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:02.915055 containerd[2793]: 2025-05-17 01:49:02.913 [INFO][9016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9" May 17 01:49:02.915443 containerd[2793]: time="2025-05-17T01:49:02.915108386Z" level=info msg="TearDown network for sandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\" successfully" May 17 01:49:02.957038 containerd[2793]: time="2025-05-17T01:49:02.956996636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:02.957108 containerd[2793]: time="2025-05-17T01:49:02.957086196Z" level=info msg="RemovePodSandbox \"e74ddc12ce72a3261fb401d3de91ba83489b96776469e5e60a648f900406f4e9\" returns successfully" May 17 01:49:02.957478 containerd[2793]: time="2025-05-17T01:49:02.957452676Z" level=info msg="StopPodSandbox for \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\"" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:02.989 [WARNING][9070] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50", Pod:"coredns-7c65d6cfc9-lvq6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali331fceb44b6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:02.989 [INFO][9070] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:02.989 [INFO][9070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" iface="eth0" netns="" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:02.989 [INFO][9070] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:02.989 [INFO][9070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:03.007 [INFO][9091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:03.007 [INFO][9091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:03.007 [INFO][9091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:03.015 [WARNING][9091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:03.015 [INFO][9091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:03.016 [INFO][9091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.018629 containerd[2793]: 2025-05-17 01:49:03.017 [INFO][9070] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.018925 containerd[2793]: time="2025-05-17T01:49:03.018673531Z" level=info msg="TearDown network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\" successfully" May 17 01:49:03.018925 containerd[2793]: time="2025-05-17T01:49:03.018703411Z" level=info msg="StopPodSandbox for \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\" returns successfully" May 17 01:49:03.019100 containerd[2793]: time="2025-05-17T01:49:03.019058611Z" level=info msg="RemovePodSandbox for \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\"" May 17 01:49:03.019129 containerd[2793]: time="2025-05-17T01:49:03.019105891Z" level=info msg="Forcibly stopping sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\"" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.049 [WARNING][9120] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"17a15b4f-bd7a-48c1-8f89-54ee79c68bdb", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"e933539df9206a8968460ad202b1ddef1a9331a9bc1ffe53d7d450da0e2dda50", Pod:"coredns-7c65d6cfc9-lvq6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali331fceb44b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:03.078482 containerd[2793]: 2025-05-17 
01:49:03.049 [INFO][9120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.049 [INFO][9120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" iface="eth0" netns="" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.049 [INFO][9120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.049 [INFO][9120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.067 [INFO][9142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.067 [INFO][9142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.067 [INFO][9142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.074 [WARNING][9142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.074 [INFO][9142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" HandleID="k8s-pod-network.3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-coredns--7c65d6cfc9--lvq6d-eth0" May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.075 [INFO][9142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.078482 containerd[2793]: 2025-05-17 01:49:03.077 [INFO][9120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c" May 17 01:49:03.078778 containerd[2793]: time="2025-05-17T01:49:03.078530584Z" level=info msg="TearDown network for sandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\" successfully" May 17 01:49:03.103435 containerd[2793]: time="2025-05-17T01:49:03.103393950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:03.103513 containerd[2793]: time="2025-05-17T01:49:03.103470790Z" level=info msg="RemovePodSandbox \"3e323d081efe9cd043e49a766e2f2d8935683838022f1b89a4150ffbfc676c0c\" returns successfully" May 17 01:49:03.103873 containerd[2793]: time="2025-05-17T01:49:03.103847470Z" level=info msg="StopPodSandbox for \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\"" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.133 [WARNING][9172] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.133 [INFO][9172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.133 [INFO][9172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" iface="eth0" netns="" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.133 [INFO][9172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.133 [INFO][9172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.151 [INFO][9194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.152 [INFO][9194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.152 [INFO][9194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.159 [WARNING][9194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.159 [INFO][9194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.160 [INFO][9194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.163388 containerd[2793]: 2025-05-17 01:49:03.162 [INFO][9172] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.163713 containerd[2793]: time="2025-05-17T01:49:03.163418084Z" level=info msg="TearDown network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\" successfully" May 17 01:49:03.163713 containerd[2793]: time="2025-05-17T01:49:03.163439484Z" level=info msg="StopPodSandbox for \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\" returns successfully" May 17 01:49:03.163845 containerd[2793]: time="2025-05-17T01:49:03.163813924Z" level=info msg="RemovePodSandbox for \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\"" May 17 01:49:03.163870 containerd[2793]: time="2025-05-17T01:49:03.163854044Z" level=info msg="Forcibly stopping sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\"" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.193 [WARNING][9225] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" WorkloadEndpoint="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.193 [INFO][9225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.194 [INFO][9225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" iface="eth0" netns="" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.194 [INFO][9225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.194 [INFO][9225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.211 [INFO][9247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.211 [INFO][9247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.211 [INFO][9247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.220 [WARNING][9247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.220 [INFO][9247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" HandleID="k8s-pod-network.235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-whisker--7558ffd48f--k4ql5-eth0" May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.221 [INFO][9247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.223746 containerd[2793]: 2025-05-17 01:49:03.222 [INFO][9225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565" May 17 01:49:03.224143 containerd[2793]: time="2025-05-17T01:49:03.223787657Z" level=info msg="TearDown network for sandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\" successfully" May 17 01:49:03.238984 containerd[2793]: time="2025-05-17T01:49:03.238951501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:03.239037 containerd[2793]: time="2025-05-17T01:49:03.239006981Z" level=info msg="RemovePodSandbox \"235a3a11732ee2df683d77d6d0840781e2ab5e8539e150f3deda838eceac4565\" returns successfully" May 17 01:49:03.239404 containerd[2793]: time="2025-05-17T01:49:03.239379901Z" level=info msg="StopPodSandbox for \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\"" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.269 [WARNING][9277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"167f036e-0c64-4fc2-a584-1f781d3f336f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7", Pod:"goldmane-8f77d7b6c-dqq55", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali265a3674f6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.269 [INFO][9277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.269 [INFO][9277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" iface="eth0" netns="" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.269 [INFO][9277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.269 [INFO][9277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.287 [INFO][9296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.287 [INFO][9296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.287 [INFO][9296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.294 [WARNING][9296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.294 [INFO][9296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.295 [INFO][9296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.298368 containerd[2793]: 2025-05-17 01:49:03.297 [INFO][9277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.298657 containerd[2793]: time="2025-05-17T01:49:03.298363714Z" level=info msg="TearDown network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\" successfully" May 17 01:49:03.298657 containerd[2793]: time="2025-05-17T01:49:03.298382714Z" level=info msg="StopPodSandbox for \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\" returns successfully" May 17 01:49:03.298745 containerd[2793]: time="2025-05-17T01:49:03.298719794Z" level=info msg="RemovePodSandbox for \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\"" May 17 01:49:03.298771 containerd[2793]: time="2025-05-17T01:49:03.298750314Z" level=info msg="Forcibly stopping sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\"" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.328 [WARNING][9323] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"167f036e-0c64-4fc2-a584-1f781d3f336f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"804564f2167aa9e8dccfd9d611fd5ffbc19397ba50bba22948b0eaea58679bf7", Pod:"goldmane-8f77d7b6c-dqq55", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali265a3674f6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.328 [INFO][9323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.328 [INFO][9323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" iface="eth0" netns="" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.328 [INFO][9323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.328 [INFO][9323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.346 [INFO][9343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.346 [INFO][9343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.346 [INFO][9343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.354 [WARNING][9343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.354 [INFO][9343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" HandleID="k8s-pod-network.72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-goldmane--8f77d7b6c--dqq55-eth0" May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.355 [INFO][9343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.357748 containerd[2793]: 2025-05-17 01:49:03.356 [INFO][9323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933" May 17 01:49:03.358015 containerd[2793]: time="2025-05-17T01:49:03.357781568Z" level=info msg="TearDown network for sandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\" successfully" May 17 01:49:03.372409 containerd[2793]: time="2025-05-17T01:49:03.372377251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:03.372458 containerd[2793]: time="2025-05-17T01:49:03.372430491Z" level=info msg="RemovePodSandbox \"72da3a86fd6641857df9ddd79e2dfbe190636346991e89c62a0614accea6c933\" returns successfully" May 17 01:49:03.372834 containerd[2793]: time="2025-05-17T01:49:03.372805371Z" level=info msg="StopPodSandbox for \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\"" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.402 [WARNING][9372] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0086f012-1f79-4125-a28a-ca399f86c285", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27", Pod:"calico-apiserver-95d6b45b8-r5pmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia17c6729f92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.402 [INFO][9372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.402 [INFO][9372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" iface="eth0" netns="" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.402 [INFO][9372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.402 [INFO][9372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.419 [INFO][9394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.419 [INFO][9394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.419 [INFO][9394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.427 [WARNING][9394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.427 [INFO][9394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.428 [INFO][9394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.430885 containerd[2793]: 2025-05-17 01:49:03.429 [INFO][9372] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.431464 containerd[2793]: time="2025-05-17T01:49:03.430910264Z" level=info msg="TearDown network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\" successfully" May 17 01:49:03.431464 containerd[2793]: time="2025-05-17T01:49:03.430931664Z" level=info msg="StopPodSandbox for \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\" returns successfully" May 17 01:49:03.431464 containerd[2793]: time="2025-05-17T01:49:03.431254145Z" level=info msg="RemovePodSandbox for \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\"" May 17 01:49:03.431464 containerd[2793]: time="2025-05-17T01:49:03.431279825Z" level=info msg="Forcibly stopping sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\"" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.459 [WARNING][9423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0", GenerateName:"calico-apiserver-95d6b45b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0086f012-1f79-4125-a28a-ca399f86c285", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d6b45b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-a9b446c9a0", ContainerID:"d6c50242073d4de37f65f2ea739f239d6664877c30e7b3ebfd10dc6a660dab27", Pod:"calico-apiserver-95d6b45b8-r5pmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia17c6729f92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.460 [INFO][9423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.460 [INFO][9423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" iface="eth0" netns="" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.460 [INFO][9423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.460 [INFO][9423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.477 [INFO][9443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.477 [INFO][9443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.477 [INFO][9443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.484 [WARNING][9443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.484 [INFO][9443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" HandleID="k8s-pod-network.1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" Workload="ci--4081.3.3--n--a9b446c9a0-k8s-calico--apiserver--95d6b45b8--r5pmv-eth0" May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.485 [INFO][9443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:49:03.487756 containerd[2793]: 2025-05-17 01:49:03.486 [INFO][9423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1" May 17 01:49:03.488131 containerd[2793]: time="2025-05-17T01:49:03.487790997Z" level=info msg="TearDown network for sandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\" successfully" May 17 01:49:03.489303 containerd[2793]: time="2025-05-17T01:49:03.489275558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 01:49:03.489347 containerd[2793]: time="2025-05-17T01:49:03.489333278Z" level=info msg="RemovePodSandbox \"1c5f090cde6cc346c028846b2bfa3b42c30663dae5d4ae3dc07fbffbb1164ae1\" returns successfully" May 17 01:49:09.431110 kubelet[4322]: E0517 01:49:09.431066 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:49:10.071215 kubelet[4322]: I0517 01:49:10.071182 4322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:49:15.431317 containerd[2793]: time="2025-05-17T01:49:15.431235592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:49:15.460689 containerd[2793]: time="2025-05-17T01:49:15.460590409Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:49:15.460891 containerd[2793]: time="2025-05-17T01:49:15.460869769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:49:15.460962 containerd[2793]: time="2025-05-17T01:49:15.460934609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:49:15.461076 kubelet[4322]: E0517 01:49:15.461024 4322 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:49:15.461343 kubelet[4322]: E0517 01:49:15.461096 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:49:15.461343 kubelet[4322]: E0517 01:49:15.461213 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c14844c5913491a84860e1a0b8551a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:49:15.462869 containerd[2793]: 
time="2025-05-17T01:49:15.462849413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:49:15.489058 containerd[2793]: time="2025-05-17T01:49:15.489001663Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:49:15.499809 containerd[2793]: time="2025-05-17T01:49:15.499772924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:49:15.499911 containerd[2793]: time="2025-05-17T01:49:15.499867164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:49:15.500013 kubelet[4322]: E0517 01:49:15.499969 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:49:15.500067 kubelet[4322]: E0517 01:49:15.500031 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:49:15.500165 kubelet[4322]: E0517 01:49:15.500130 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:49:15.501289 kubelet[4322]: E0517 01:49:15.501264 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:49:20.431940 containerd[2793]: time="2025-05-17T01:49:20.431897633Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:49:20.479746 containerd[2793]: time="2025-05-17T01:49:20.479692834Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:49:20.479983 containerd[2793]: time="2025-05-17T01:49:20.479953794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:49:20.480056 containerd[2793]: time="2025-05-17T01:49:20.480028994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:49:20.480140 kubelet[4322]: E0517 01:49:20.480102 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:49:20.480376 kubelet[4322]: E0517 01:49:20.480149 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:49:20.480376 kubelet[4322]: E0517 01:49:20.480262 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4m2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:49:20.481409 kubelet[4322]: E0517 01:49:20.481388 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:49:27.432206 kubelet[4322]: E0517 01:49:27.432139 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:49:33.431961 kubelet[4322]: E0517 01:49:33.431914 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:49:41.432010 kubelet[4322]: E0517 01:49:41.431966 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:49:44.431594 kubelet[4322]: E0517 01:49:44.431555 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:49:54.431423 kubelet[4322]: 
E0517 01:49:54.431371 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:49:55.431113 kubelet[4322]: E0517 01:49:55.431068 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:50:06.431635 containerd[2793]: time="2025-05-17T01:50:06.431590947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:50:06.458933 containerd[2793]: time="2025-05-17T01:50:06.458792923Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:50:06.466917 containerd[2793]: time="2025-05-17T01:50:06.466879208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:50:06.466988 containerd[2793]: time="2025-05-17T01:50:06.466940528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active 
requests=0, bytes read=86" May 17 01:50:06.467135 kubelet[4322]: E0517 01:50:06.467085 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:50:06.467429 kubelet[4322]: E0517 01:50:06.467148 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:50:06.467429 kubelet[4322]: E0517 01:50:06.467301 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4m2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:50:06.468444 kubelet[4322]: E0517 01:50:06.468423 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:50:09.431515 containerd[2793]: 
time="2025-05-17T01:50:09.431441124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:50:09.475197 containerd[2793]: time="2025-05-17T01:50:09.475158108Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:50:09.475452 containerd[2793]: time="2025-05-17T01:50:09.475431789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:50:09.475529 containerd[2793]: time="2025-05-17T01:50:09.475503989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:50:09.475580 kubelet[4322]: E0517 01:50:09.475553 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:50:09.475882 kubelet[4322]: E0517 01:50:09.475592 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:50:09.475882 kubelet[4322]: E0517 01:50:09.475676 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c14844c5913491a84860e1a0b8551a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:50:09.477786 containerd[2793]: time="2025-05-17T01:50:09.477770550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:50:09.500487 containerd[2793]: time="2025-05-17T01:50:09.500463002Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:50:09.500674 containerd[2793]: time="2025-05-17T01:50:09.500652562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:50:09.500729 containerd[2793]: time="2025-05-17T01:50:09.500715922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:50:09.500851 kubelet[4322]: E0517 01:50:09.500805 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:50:09.500934 kubelet[4322]: E0517 01:50:09.500860 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:50:09.501033 kubelet[4322]: E0517 01:50:09.500978 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:50:09.502143 kubelet[4322]: E0517 01:50:09.502110 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:50:14.715355 systemd[1]: Started sshd@9-147.28.150.2:22-218.92.0.158:11271.service - OpenSSH per-connection server daemon (218.92.0.158:11271). May 17 01:50:16.300359 sshd[9692]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:50:18.306778 sshd[9690]: PAM: Permission denied for root from 218.92.0.158 May 17 01:50:18.734683 sshd[9693]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:50:20.681065 sshd[9690]: PAM: Permission denied for root from 218.92.0.158 May 17 01:50:21.109285 sshd[9694]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:50:21.431356 kubelet[4322]: E0517 01:50:21.431257 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:50:23.000316 sshd[9690]: PAM: Permission denied for root from 218.92.0.158 May 17 01:50:23.214218 sshd[9690]: Received disconnect from 218.92.0.158 port 11271:11: [preauth] May 17 01:50:23.214218 sshd[9690]: Disconnected from authenticating user root 218.92.0.158 port 11271 [preauth] May 17 01:50:23.216622 systemd[1]: sshd@9-147.28.150.2:22-218.92.0.158:11271.service: Deactivated successfully. 
May 17 01:50:23.432211 kubelet[4322]: E0517 01:50:23.432178 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:50:32.433562 kubelet[4322]: E0517 01:50:32.433406 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:50:34.431290 kubelet[4322]: E0517 01:50:34.431249 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:50:45.431250 kubelet[4322]: E0517 01:50:45.431209 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:50:48.433274 kubelet[4322]: E0517 01:50:48.433229 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:50:56.431197 kubelet[4322]: E0517 01:50:56.431146 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:50:59.431664 kubelet[4322]: E0517 01:50:59.431626 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:51:10.431334 kubelet[4322]: E0517 01:51:10.431292 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:51:12.431967 kubelet[4322]: E0517 01:51:12.431914 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" 
pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:51:24.431004 kubelet[4322]: E0517 01:51:24.430949 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:51:26.431465 kubelet[4322]: E0517 01:51:26.431398 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:51:38.431562 containerd[2793]: time="2025-05-17T01:51:38.431523338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:51:38.470198 containerd[2793]: time="2025-05-17T01:51:38.470148188Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:51:38.483663 containerd[2793]: time="2025-05-17T01:51:38.483631912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:51:38.483751 
containerd[2793]: time="2025-05-17T01:51:38.483708312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:51:38.483840 kubelet[4322]: E0517 01:51:38.483791 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:51:38.484182 kubelet[4322]: E0517 01:51:38.483848 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:51:38.484182 kubelet[4322]: E0517 01:51:38.484044 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4m2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:51:38.484290 containerd[2793]: time="2025-05-17T01:51:38.484105192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:51:38.485207 kubelet[4322]: E0517 01:51:38.485186 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:51:38.508925 containerd[2793]: time="2025-05-17T01:51:38.508881078Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:51:38.509184 containerd[2793]: time="2025-05-17T01:51:38.509159558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:51:38.509247 containerd[2793]: time="2025-05-17T01:51:38.509221758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:51:38.509343 kubelet[4322]: E0517 01:51:38.509305 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:51:38.509385 kubelet[4322]: E0517 01:51:38.509353 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:51:38.509514 kubelet[4322]: E0517 01:51:38.509462 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c14844c5913491a84860e1a0b8551a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:51:38.511145 containerd[2793]: time="2025-05-17T01:51:38.511127239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:51:38.535845 containerd[2793]: time="2025-05-17T01:51:38.535797285Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:51:38.536047 containerd[2793]: time="2025-05-17T01:51:38.536023766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:51:38.536125 containerd[2793]: time="2025-05-17T01:51:38.536096726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:51:38.536213 kubelet[4322]: E0517 01:51:38.536178 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:51:38.536260 kubelet[4322]: E0517 01:51:38.536222 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:51:38.536390 kubelet[4322]: E0517 01:51:38.536335 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:51:38.537532 kubelet[4322]: E0517 01:51:38.537500 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:51:50.736365 systemd[1]: Started sshd@10-147.28.150.2:22-36.110.172.218:36670.service - OpenSSH per-connection server daemon (36.110.172.218:36670). May 17 01:51:50.774239 systemd[1]: Started sshd@11-147.28.150.2:22-123.30.249.49:45351.service - OpenSSH per-connection server daemon (123.30.249.49:45351). May 17 01:51:51.431823 kubelet[4322]: E0517 01:51:51.431774 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:51:52.170928 sshd[9946]: Invalid user faisal from 123.30.249.49 port 45351 May 17 01:51:52.431930 kubelet[4322]: E0517 01:51:52.431872 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:51:52.436767 sshd[9946]: Received disconnect from 123.30.249.49 port 45351:11: Bye Bye [preauth] May 17 01:51:52.436767 sshd[9946]: Disconnected from invalid user faisal 123.30.249.49 port 45351 [preauth] May 17 01:51:52.438669 systemd[1]: sshd@11-147.28.150.2:22-123.30.249.49:45351.service: Deactivated successfully. 
May 17 01:51:55.950553 sshd[9944]: Invalid user elena from 36.110.172.218 port 36670 May 17 01:51:56.781437 sshd[9944]: Received disconnect from 36.110.172.218 port 36670:11: Bye Bye [preauth] May 17 01:51:56.781636 sshd[9944]: Disconnected from invalid user elena 36.110.172.218 port 36670 [preauth] May 17 01:51:56.783357 systemd[1]: sshd@10-147.28.150.2:22-36.110.172.218:36670.service: Deactivated successfully. May 17 01:52:02.431502 kubelet[4322]: E0517 01:52:02.431466 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:52:06.431462 kubelet[4322]: E0517 01:52:06.431430 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:52:14.433904 kubelet[4322]: E0517 01:52:14.433846 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:52:20.431546 kubelet[4322]: E0517 01:52:20.431503 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:52:25.432089 kubelet[4322]: E0517 01:52:25.432017 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:52:33.431611 kubelet[4322]: E0517 01:52:33.431561 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:52:38.219351 systemd[1]: Started sshd@12-147.28.150.2:22-218.92.0.158:53551.service - OpenSSH per-connection server daemon (218.92.0.158:53551). 
May 17 01:52:39.431774 kubelet[4322]: E0517 01:52:39.431737 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:52:39.749263 sshd[10089]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:52:41.720588 sshd[10086]: PAM: Permission denied for root from 218.92.0.158 May 17 01:52:41.731032 update_engine[2783]: I20250517 01:52:41.730876 2783 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 01:52:41.731032 update_engine[2783]: I20250517 01:52:41.730937 2783 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 01:52:41.731435 update_engine[2783]: I20250517 01:52:41.731184 2783 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 01:52:41.731543 update_engine[2783]: I20250517 01:52:41.731520 2783 omaha_request_params.cc:62] Current group set to lts May 17 01:52:41.731708 update_engine[2783]: I20250517 01:52:41.731689 2783 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 01:52:41.732380 update_engine[2783]: I20250517 01:52:41.731760 2783 update_attempter.cc:643] Scheduling an action processor start. 
May 17 01:52:41.732380 update_engine[2783]: I20250517 01:52:41.731783 2783 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 01:52:41.732380 update_engine[2783]: I20250517 01:52:41.731814 2783 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 01:52:41.732380 update_engine[2783]: I20250517 01:52:41.731869 2783 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 01:52:41.732380 update_engine[2783]: I20250517 01:52:41.731876 2783 omaha_request_action.cc:272] Request: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: May 17 01:52:41.732380 update_engine[2783]: I20250517 01:52:41.731884 2783 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:52:41.732721 locksmithd[2816]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 01:52:41.732927 update_engine[2783]: I20250517 01:52:41.732813 2783 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:52:41.733102 update_engine[2783]: I20250517 01:52:41.733067 2783 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 01:52:41.733695 update_engine[2783]: E20250517 01:52:41.733668 2783 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:52:41.733815 update_engine[2783]: I20250517 01:52:41.733798 2783 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 01:52:42.132160 sshd[10112]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:52:44.043382 sshd[10086]: PAM: Permission denied for root from 218.92.0.158 May 17 01:52:44.458416 sshd[10116]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:52:45.782351 sshd[10086]: PAM: Permission denied for root from 218.92.0.158 May 17 01:52:45.988093 sshd[10086]: Received disconnect from 218.92.0.158 port 53551:11: [preauth] May 17 01:52:45.988093 sshd[10086]: Disconnected from authenticating user root 218.92.0.158 port 53551 [preauth] May 17 01:52:45.989727 systemd[1]: sshd@12-147.28.150.2:22-218.92.0.158:53551.service: Deactivated successfully. May 17 01:52:47.430875 kubelet[4322]: E0517 01:52:47.430837 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:52:51.640867 update_engine[2783]: I20250517 01:52:51.640795 2783 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:52:51.641363 update_engine[2783]: I20250517 01:52:51.641037 2783 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:52:51.641363 update_engine[2783]: I20250517 01:52:51.641240 2783 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 01:52:51.642150 update_engine[2783]: E20250517 01:52:51.642131 2783 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:52:51.642180 update_engine[2783]: I20250517 01:52:51.642172 2783 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 01:52:54.432221 kubelet[4322]: E0517 01:52:54.432159 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:53:01.431852 kubelet[4322]: E0517 01:53:01.431801 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:53:01.640265 update_engine[2783]: I20250517 01:53:01.640206 2783 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:53:01.640536 update_engine[2783]: I20250517 01:53:01.640491 2783 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:53:01.640681 update_engine[2783]: I20250517 01:53:01.640657 2783 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 01:53:01.641351 update_engine[2783]: E20250517 01:53:01.641333 2783 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:53:01.641385 update_engine[2783]: I20250517 01:53:01.641373 2783 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 01:53:05.431516 kubelet[4322]: E0517 01:53:05.431469 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:53:11.640535 update_engine[2783]: I20250517 01:53:11.640408 2783 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:53:11.640915 update_engine[2783]: I20250517 01:53:11.640672 2783 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:53:11.640915 update_engine[2783]: I20250517 01:53:11.640856 2783 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 01:53:11.641438 update_engine[2783]: E20250517 01:53:11.641417 2783 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:53:11.641481 update_engine[2783]: I20250517 01:53:11.641458 2783 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 01:53:11.641481 update_engine[2783]: I20250517 01:53:11.641465 2783 omaha_request_action.cc:617] Omaha request response: May 17 01:53:11.641551 update_engine[2783]: E20250517 01:53:11.641538 2783 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 01:53:11.641581 update_engine[2783]: I20250517 01:53:11.641556 2783 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. 
Aborting processing. May 17 01:53:11.641581 update_engine[2783]: I20250517 01:53:11.641561 2783 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:53:11.641581 update_engine[2783]: I20250517 01:53:11.641566 2783 update_attempter.cc:306] Processing Done. May 17 01:53:11.641646 update_engine[2783]: E20250517 01:53:11.641580 2783 update_attempter.cc:619] Update failed. May 17 01:53:11.641646 update_engine[2783]: I20250517 01:53:11.641587 2783 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 01:53:11.641646 update_engine[2783]: I20250517 01:53:11.641590 2783 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 01:53:11.641646 update_engine[2783]: I20250517 01:53:11.641595 2783 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 17 01:53:11.641722 update_engine[2783]: I20250517 01:53:11.641654 2783 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 01:53:11.641722 update_engine[2783]: I20250517 01:53:11.641675 2783 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 01:53:11.641722 update_engine[2783]: I20250517 01:53:11.641680 2783 omaha_request_action.cc:272] Request: May 17 01:53:11.641722 update_engine[2783]: May 17 01:53:11.641722 update_engine[2783]: May 17 01:53:11.641722 update_engine[2783]: May 17 01:53:11.641722 update_engine[2783]: May 17 01:53:11.641722 update_engine[2783]: May 17 01:53:11.641722 update_engine[2783]: May 17 01:53:11.641722 update_engine[2783]: I20250517 01:53:11.641686 2783 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:53:11.641897 update_engine[2783]: I20250517 01:53:11.641799 2783 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:53:11.641919 locksmithd[2816]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 
NewSize=0 May 17 01:53:11.642148 update_engine[2783]: I20250517 01:53:11.641934 2783 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 01:53:11.642585 update_engine[2783]: E20250517 01:53:11.642566 2783 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:53:11.642613 update_engine[2783]: I20250517 01:53:11.642602 2783 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 01:53:11.642613 update_engine[2783]: I20250517 01:53:11.642609 2783 omaha_request_action.cc:617] Omaha request response: May 17 01:53:11.642658 update_engine[2783]: I20250517 01:53:11.642616 2783 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:53:11.642658 update_engine[2783]: I20250517 01:53:11.642620 2783 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:53:11.642658 update_engine[2783]: I20250517 01:53:11.642624 2783 update_attempter.cc:306] Processing Done. May 17 01:53:11.642658 update_engine[2783]: I20250517 01:53:11.642629 2783 update_attempter.cc:310] Error event sent. 
May 17 01:53:11.642658 update_engine[2783]: I20250517 01:53:11.642637 2783 update_check_scheduler.cc:74] Next update check in 44m7s May 17 01:53:11.642783 locksmithd[2816]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 01:53:16.431027 kubelet[4322]: E0517 01:53:16.430984 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:53:17.431894 kubelet[4322]: E0517 01:53:17.431852 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:53:29.431563 kubelet[4322]: E0517 01:53:29.431292 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:53:31.431429 kubelet[4322]: E0517 01:53:31.431388 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" 
podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:53:43.431916 kubelet[4322]: E0517 01:53:43.431866 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:53:45.431004 kubelet[4322]: E0517 01:53:45.430939 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:53:57.432015 kubelet[4322]: E0517 01:53:57.431963 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:54:00.431697 kubelet[4322]: E0517 01:54:00.431662 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:54:10.431617 kubelet[4322]: E0517 01:54:10.431562 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:54:14.431524 kubelet[4322]: E0517 01:54:14.431485 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:54:23.432176 containerd[2793]: time="2025-05-17T01:54:23.432086104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:54:23.545744 containerd[2793]: time="2025-05-17T01:54:23.545706972Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:54:23.545997 containerd[2793]: time="2025-05-17T01:54:23.545976732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:54:23.546063 containerd[2793]: time="2025-05-17T01:54:23.546037412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:54:23.546196 kubelet[4322]: E0517 01:54:23.546143 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:54:23.546477 kubelet[4322]: E0517 01:54:23.546205 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:54:23.546477 kubelet[4322]: E0517 01:54:23.546328 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c14844c5913491a84860e1a0b8551a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:54:23.547981 containerd[2793]: 
time="2025-05-17T01:54:23.547965692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:54:23.573018 containerd[2793]: time="2025-05-17T01:54:23.572978739Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:54:23.573252 containerd[2793]: time="2025-05-17T01:54:23.573229979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:54:23.573318 containerd[2793]: time="2025-05-17T01:54:23.573294379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:54:23.573383 kubelet[4322]: E0517 01:54:23.573358 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:54:23.573433 kubelet[4322]: E0517 01:54:23.573390 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:54:23.573506 kubelet[4322]: E0517 01:54:23.573474 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cc6bcc89-ghtr6_calico-system(26bebf9d-4188-4210-90ef-079cfef2bc0c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:54:23.574648 kubelet[4322]: E0517 01:54:23.574609 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:54:27.431280 containerd[2793]: time="2025-05-17T01:54:27.431246571Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:54:27.552096 containerd[2793]: time="2025-05-17T01:54:27.552016121Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:54:27.552342 containerd[2793]: time="2025-05-17T01:54:27.552310721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:54:27.552415 containerd[2793]: time="2025-05-17T01:54:27.552371121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:54:27.552540 kubelet[4322]: E0517 01:54:27.552493 4322 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:54:27.552840 kubelet[4322]: E0517 01:54:27.552548 4322 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:54:27.552840 kubelet[4322]: E0517 01:54:27.552670 4322 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4m2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-dqq55_calico-system(167f036e-0c64-4fc2-a584-1f781d3f336f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:54:27.553852 kubelet[4322]: E0517 01:54:27.553823 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:54:36.431919 kubelet[4322]: E0517 01:54:36.431868 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:54:41.431515 kubelet[4322]: E0517 01:54:41.431477 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:54:50.431559 kubelet[4322]: E0517 01:54:50.431505 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:54:56.430987 kubelet[4322]: E0517 01:54:56.430930 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:55:02.423384 systemd[1]: 
Started sshd@13-147.28.150.2:22-218.92.0.158:33352.service - OpenSSH per-connection server daemon (218.92.0.158:33352). May 17 01:55:02.432209 kubelet[4322]: E0517 01:55:02.432181 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:55:03.957236 sshd[10493]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:55:05.361711 sshd[10489]: PAM: Permission denied for root from 218.92.0.158 May 17 01:55:05.775163 sshd[10494]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:55:08.122871 sshd[10489]: PAM: Permission denied for root from 218.92.0.158 May 17 01:55:08.538052 sshd[10495]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 01:55:10.298483 sshd[10489]: PAM: Permission denied for root from 218.92.0.158 May 17 01:55:10.430936 kubelet[4322]: E0517 01:55:10.430898 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:55:10.507143 sshd[10489]: Received disconnect from 218.92.0.158 port 33352:11: [preauth] May 17 01:55:10.507143 sshd[10489]: Disconnected from authenticating user root 218.92.0.158 port 33352 [preauth] May 17 01:55:10.509025 systemd[1]: sshd@13-147.28.150.2:22-218.92.0.158:33352.service: 
Deactivated successfully. May 17 01:55:13.431089 kubelet[4322]: E0517 01:55:13.431022 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:55:21.431419 kubelet[4322]: E0517 01:55:21.431368 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:55:26.431598 kubelet[4322]: E0517 01:55:26.431549 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:55:32.301324 systemd[1]: Started sshd@14-147.28.150.2:22-36.110.172.218:55384.service - OpenSSH per-connection server daemon (36.110.172.218:55384). May 17 01:55:33.471500 sshd[10577]: Invalid user ws from 36.110.172.218 port 55384 May 17 01:55:33.694211 sshd[10577]: Received disconnect from 36.110.172.218 port 55384:11: Bye Bye [preauth] May 17 01:55:33.694211 sshd[10577]: Disconnected from invalid user ws 36.110.172.218 port 55384 [preauth] May 17 01:55:33.696574 systemd[1]: sshd@14-147.28.150.2:22-36.110.172.218:55384.service: Deactivated successfully. 
May 17 01:55:36.431723 kubelet[4322]: E0517 01:55:36.431671 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:55:41.431921 kubelet[4322]: E0517 01:55:41.431884 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:55:49.431568 kubelet[4322]: E0517 01:55:49.431514 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:55:52.432494 kubelet[4322]: E0517 01:55:52.432432 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:56:01.430908 kubelet[4322]: E0517 01:56:01.430870 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:56:05.431088 kubelet[4322]: E0517 01:56:05.431027 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:56:14.431274 kubelet[4322]: E0517 01:56:14.431239 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:56:15.119331 systemd[1]: Started sshd@15-147.28.150.2:22-147.75.109.163:36802.service - OpenSSH per-connection server daemon (147.75.109.163:36802). May 17 01:56:15.536720 sshd[10688]: Accepted publickey for core from 147.75.109.163 port 36802 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:15.537811 sshd[10688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:15.541083 systemd-logind[2777]: New session 10 of user core. May 17 01:56:15.552256 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 01:56:15.892791 sshd[10688]: pam_unix(sshd:session): session closed for user core May 17 01:56:15.895623 systemd[1]: sshd@15-147.28.150.2:22-147.75.109.163:36802.service: Deactivated successfully. May 17 01:56:15.897405 systemd-logind[2777]: Session 10 logged out. Waiting for processes to exit. May 17 01:56:15.897499 systemd[1]: session-10.scope: Deactivated successfully. 
May 17 01:56:15.898212 systemd-logind[2777]: Removed session 10. May 17 01:56:17.431790 kubelet[4322]: E0517 01:56:17.431752 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:56:20.965266 systemd[1]: Started sshd@16-147.28.150.2:22-147.75.109.163:55342.service - OpenSSH per-connection server daemon (147.75.109.163:55342). May 17 01:56:21.368134 sshd[10732]: Accepted publickey for core from 147.75.109.163 port 55342 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:21.369327 sshd[10732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:21.372383 systemd-logind[2777]: New session 11 of user core. May 17 01:56:21.381345 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 01:56:21.714922 sshd[10732]: pam_unix(sshd:session): session closed for user core May 17 01:56:21.717781 systemd[1]: sshd@16-147.28.150.2:22-147.75.109.163:55342.service: Deactivated successfully. May 17 01:56:21.719571 systemd-logind[2777]: Session 11 logged out. Waiting for processes to exit. May 17 01:56:21.719659 systemd[1]: session-11.scope: Deactivated successfully. May 17 01:56:21.720366 systemd-logind[2777]: Removed session 11. May 17 01:56:21.785243 systemd[1]: Started sshd@17-147.28.150.2:22-147.75.109.163:55350.service - OpenSSH per-connection server daemon (147.75.109.163:55350). 
May 17 01:56:22.183061 sshd[10775]: Accepted publickey for core from 147.75.109.163 port 55350 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:22.184106 sshd[10775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:22.186865 systemd-logind[2777]: New session 12 of user core. May 17 01:56:22.196256 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 01:56:22.548280 sshd[10775]: pam_unix(sshd:session): session closed for user core May 17 01:56:22.551096 systemd[1]: sshd@17-147.28.150.2:22-147.75.109.163:55350.service: Deactivated successfully. May 17 01:56:22.552899 systemd-logind[2777]: Session 12 logged out. Waiting for processes to exit. May 17 01:56:22.552994 systemd[1]: session-12.scope: Deactivated successfully. May 17 01:56:22.553709 systemd-logind[2777]: Removed session 12. May 17 01:56:22.623342 systemd[1]: Started sshd@18-147.28.150.2:22-147.75.109.163:55358.service - OpenSSH per-connection server daemon (147.75.109.163:55358). May 17 01:56:23.025807 sshd[10827]: Accepted publickey for core from 147.75.109.163 port 55358 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:23.026869 sshd[10827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:23.029724 systemd-logind[2777]: New session 13 of user core. May 17 01:56:23.044336 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 01:56:23.370855 sshd[10827]: pam_unix(sshd:session): session closed for user core May 17 01:56:23.373670 systemd[1]: sshd@18-147.28.150.2:22-147.75.109.163:55358.service: Deactivated successfully. May 17 01:56:23.375459 systemd-logind[2777]: Session 13 logged out. Waiting for processes to exit. May 17 01:56:23.375553 systemd[1]: session-13.scope: Deactivated successfully. May 17 01:56:23.376235 systemd-logind[2777]: Removed session 13. 
May 17 01:56:28.431294 kubelet[4322]: E0517 01:56:28.431189 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:56:28.444272 systemd[1]: Started sshd@19-147.28.150.2:22-147.75.109.163:56962.service - OpenSSH per-connection server daemon (147.75.109.163:56962). May 17 01:56:28.845076 sshd[10870]: Accepted publickey for core from 147.75.109.163 port 56962 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:28.846134 sshd[10870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:28.849014 systemd-logind[2777]: New session 14 of user core. May 17 01:56:28.856251 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 01:56:29.189184 sshd[10870]: pam_unix(sshd:session): session closed for user core May 17 01:56:29.191916 systemd[1]: sshd@19-147.28.150.2:22-147.75.109.163:56962.service: Deactivated successfully. May 17 01:56:29.193711 systemd-logind[2777]: Session 14 logged out. Waiting for processes to exit. May 17 01:56:29.193808 systemd[1]: session-14.scope: Deactivated successfully. May 17 01:56:29.194622 systemd-logind[2777]: Removed session 14. May 17 01:56:29.268328 systemd[1]: Started sshd@20-147.28.150.2:22-147.75.109.163:56972.service - OpenSSH per-connection server daemon (147.75.109.163:56972). May 17 01:56:29.670763 sshd[10909]: Accepted publickey for core from 147.75.109.163 port 56972 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:29.671842 sshd[10909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:29.674904 systemd-logind[2777]: New session 15 of user core. May 17 01:56:29.684255 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 17 01:56:30.133767 sshd[10909]: pam_unix(sshd:session): session closed for user core May 17 01:56:30.136582 systemd[1]: sshd@20-147.28.150.2:22-147.75.109.163:56972.service: Deactivated successfully. May 17 01:56:30.138426 systemd-logind[2777]: Session 15 logged out. Waiting for processes to exit. May 17 01:56:30.138517 systemd[1]: session-15.scope: Deactivated successfully. May 17 01:56:30.139290 systemd-logind[2777]: Removed session 15. May 17 01:56:30.210335 systemd[1]: Started sshd@21-147.28.150.2:22-147.75.109.163:56986.service - OpenSSH per-connection server daemon (147.75.109.163:56986). May 17 01:56:30.431541 kubelet[4322]: E0517 01:56:30.431460 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:56:30.620521 sshd[10976]: Accepted publickey for core from 147.75.109.163 port 56986 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:30.621613 sshd[10976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:30.624634 systemd-logind[2777]: New session 16 of user core. May 17 01:56:30.643342 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 01:56:32.168873 sshd[10976]: pam_unix(sshd:session): session closed for user core May 17 01:56:32.171679 systemd[1]: sshd@21-147.28.150.2:22-147.75.109.163:56986.service: Deactivated successfully. May 17 01:56:32.173497 systemd-logind[2777]: Session 16 logged out. Waiting for processes to exit. May 17 01:56:32.173592 systemd[1]: session-16.scope: Deactivated successfully. May 17 01:56:32.174355 systemd-logind[2777]: Removed session 16. 
May 17 01:56:32.239343 systemd[1]: Started sshd@22-147.28.150.2:22-147.75.109.163:56990.service - OpenSSH per-connection server daemon (147.75.109.163:56990). May 17 01:56:32.645695 sshd[11082]: Accepted publickey for core from 147.75.109.163 port 56990 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:32.646873 sshd[11082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:32.649996 systemd-logind[2777]: New session 17 of user core. May 17 01:56:32.670369 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 01:56:33.073658 sshd[11082]: pam_unix(sshd:session): session closed for user core May 17 01:56:33.076475 systemd[1]: sshd@22-147.28.150.2:22-147.75.109.163:56990.service: Deactivated successfully. May 17 01:56:33.078272 systemd-logind[2777]: Session 17 logged out. Waiting for processes to exit. May 17 01:56:33.078368 systemd[1]: session-17.scope: Deactivated successfully. May 17 01:56:33.079132 systemd-logind[2777]: Removed session 17. May 17 01:56:33.145241 systemd[1]: Started sshd@23-147.28.150.2:22-147.75.109.163:57006.service - OpenSSH per-connection server daemon (147.75.109.163:57006). May 17 01:56:33.556267 sshd[11128]: Accepted publickey for core from 147.75.109.163 port 57006 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:33.557343 sshd[11128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:33.560443 systemd-logind[2777]: New session 18 of user core. May 17 01:56:33.569364 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 01:56:33.904351 sshd[11128]: pam_unix(sshd:session): session closed for user core May 17 01:56:33.907234 systemd[1]: sshd@23-147.28.150.2:22-147.75.109.163:57006.service: Deactivated successfully. May 17 01:56:33.909051 systemd-logind[2777]: Session 18 logged out. Waiting for processes to exit. 
May 17 01:56:33.909153 systemd[1]: session-18.scope: Deactivated successfully. May 17 01:56:33.909896 systemd-logind[2777]: Removed session 18. May 17 01:56:38.981344 systemd[1]: Started sshd@24-147.28.150.2:22-147.75.109.163:45560.service - OpenSSH per-connection server daemon (147.75.109.163:45560). May 17 01:56:39.382181 sshd[11169]: Accepted publickey for core from 147.75.109.163 port 45560 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:39.383265 sshd[11169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:39.386118 systemd-logind[2777]: New session 19 of user core. May 17 01:56:39.404341 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 01:56:39.431389 kubelet[4322]: E0517 01:56:39.431355 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f" May 17 01:56:39.726105 sshd[11169]: pam_unix(sshd:session): session closed for user core May 17 01:56:39.728863 systemd[1]: sshd@24-147.28.150.2:22-147.75.109.163:45560.service: Deactivated successfully. May 17 01:56:39.730695 systemd-logind[2777]: Session 19 logged out. Waiting for processes to exit. May 17 01:56:39.730792 systemd[1]: session-19.scope: Deactivated successfully. May 17 01:56:39.731533 systemd-logind[2777]: Removed session 19. May 17 01:56:44.797346 systemd[1]: Started sshd@25-147.28.150.2:22-147.75.109.163:45574.service - OpenSSH per-connection server daemon (147.75.109.163:45574). 
May 17 01:56:45.196488 sshd[11234]: Accepted publickey for core from 147.75.109.163 port 45574 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:45.197532 sshd[11234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:45.200561 systemd-logind[2777]: New session 20 of user core. May 17 01:56:45.210350 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 01:56:45.431443 kubelet[4322]: E0517 01:56:45.431406 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cc6bcc89-ghtr6" podUID="26bebf9d-4188-4210-90ef-079cfef2bc0c" May 17 01:56:45.539181 sshd[11234]: pam_unix(sshd:session): session closed for user core May 17 01:56:45.541947 systemd[1]: sshd@25-147.28.150.2:22-147.75.109.163:45574.service: Deactivated successfully. May 17 01:56:45.543749 systemd-logind[2777]: Session 20 logged out. Waiting for processes to exit. May 17 01:56:45.543839 systemd[1]: session-20.scope: Deactivated successfully. May 17 01:56:45.544587 systemd-logind[2777]: Removed session 20. May 17 01:56:50.618349 systemd[1]: Started sshd@26-147.28.150.2:22-147.75.109.163:38522.service - OpenSSH per-connection server daemon (147.75.109.163:38522). May 17 01:56:51.027285 sshd[11268]: Accepted publickey for core from 147.75.109.163 port 38522 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 01:56:51.028501 sshd[11268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:56:51.032137 systemd-logind[2777]: New session 21 of user core. May 17 01:56:51.043276 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 17 01:56:51.369871 sshd[11268]: pam_unix(sshd:session): session closed for user core May 17 01:56:51.372745 systemd[1]: sshd@26-147.28.150.2:22-147.75.109.163:38522.service: Deactivated successfully. May 17 01:56:51.374532 systemd-logind[2777]: Session 21 logged out. Waiting for processes to exit. May 17 01:56:51.374620 systemd[1]: session-21.scope: Deactivated successfully. May 17 01:56:51.375389 systemd-logind[2777]: Removed session 21. May 17 01:56:52.431239 kubelet[4322]: E0517 01:56:52.431204 4322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-dqq55" podUID="167f036e-0c64-4fc2-a584-1f781d3f336f"