May 17 00:12:56.164269 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 17 00:12:56.164291 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:12:56.164299 kernel: KASLR enabled
May 17 00:12:56.164305 kernel: efi: EFI v2.7 by American Megatrends
May 17 00:12:56.164311 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47e818 RNG=0xebf00018 MEMRESERVE=0xe45e8f98
May 17 00:12:56.164317 kernel: random: crng init done
May 17 00:12:56.164324 kernel: esrt: Reserving ESRT space from 0x00000000ea47e818 to 0x00000000ea47e878.
May 17 00:12:56.164330 kernel: ACPI: Early table checksum verification disabled
May 17 00:12:56.164337 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 17 00:12:56.164343 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 17 00:12:56.164349 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 17 00:12:56.164355 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 17 00:12:56.164361 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 17 00:12:56.164368 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 17 00:12:56.164377 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 17 00:12:56.164384 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:12:56.164390 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 17 00:12:56.164397 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:12:56.164403 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 17 00:12:56.164410 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164416 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164422 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164429 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164436 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 17 00:12:56.164443 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 17 00:12:56.164449 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:12:56.164456 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 17 00:12:56.164462 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 17 00:12:56.164468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 17 00:12:56.164475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 17 00:12:56.164481 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 17 00:12:56.164488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 17 00:12:56.164494 kernel: NUMA: NODE_DATA [mem 0x83fdffca800-0x83fdffcffff]
May 17 00:12:56.164500 kernel: Zone ranges:
May 17 00:12:56.164507 kernel:   DMA      [mem 0x0000000088300000-0x00000000ffffffff]
May 17 00:12:56.164514 kernel:   DMA32    empty
May 17 00:12:56.164521 kernel:   Normal   [mem 0x0000000100000000-0x0000083fffffffff]
May 17 00:12:56.164527 kernel: Movable zone start for each node
May 17 00:12:56.164533 kernel: Early memory node ranges
May 17 00:12:56.164540 kernel:   node   0: [mem 0x0000000088300000-0x00000000883fffff]
May 17 00:12:56.164549 kernel:   node   0: [mem 0x0000000090000000-0x0000000091ffffff]
May 17 00:12:56.164556 kernel:   node   0: [mem 0x0000000092000000-0x0000000093ffffff]
May 17 00:12:56.164564 kernel:   node   0: [mem 0x0000000094000000-0x00000000eba37fff]
May 17 00:12:56.164571 kernel:   node   0: [mem 0x00000000eba38000-0x00000000ebeccfff]
May 17 00:12:56.164578 kernel:   node   0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 17 00:12:56.164584 kernel:   node   0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 17 00:12:56.164595 kernel:   node   0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 17 00:12:56.164602 kernel:   node   0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 17 00:12:56.164608 kernel:   node   0: [mem 0x00000000ec100000-0x00000000ee54ffff]
May 17 00:12:56.164615 kernel:   node   0: [mem 0x00000000ee550000-0x00000000f765ffff]
May 17 00:12:56.164622 kernel:   node   0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 17 00:12:56.164628 kernel:   node   0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 17 00:12:56.164637 kernel:   node   0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 17 00:12:56.164644 kernel:   node   0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 17 00:12:56.164650 kernel:   node   0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 17 00:12:56.164657 kernel:   node   0: [mem 0x0000080000000000-0x000008007fffffff]
May 17 00:12:56.164664 kernel:   node   0: [mem 0x0000080100000000-0x0000083fffffffff]
May 17 00:12:56.164670 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 17 00:12:56.164677 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 17 00:12:56.164684 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 17 00:12:56.164691 kernel: psci: probing for conduit method from ACPI.
May 17 00:12:56.164697 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:12:56.164704 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:12:56.164712 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 00:12:56.164719 kernel: psci: SMC Calling Convention v1.2
May 17 00:12:56.164725 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 17 00:12:56.164732 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 17 00:12:56.164739 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 17 00:12:56.164746 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 17 00:12:56.164752 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 17 00:12:56.164759 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 17 00:12:56.164766 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 17 00:12:56.164773 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 17 00:12:56.164779 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 17 00:12:56.164786 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 17 00:12:56.164794 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 17 00:12:56.164801 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 17 00:12:56.164808 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 17 00:12:56.164814 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 17 00:12:56.164821 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 17 00:12:56.164828 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 17 00:12:56.164834 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 17 00:12:56.164841 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 17 00:12:56.164848 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 17 00:12:56.164854 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 17 00:12:56.164861 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 17 00:12:56.164868 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 17 00:12:56.164876 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 17 00:12:56.164883 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 17 00:12:56.164889 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 17 00:12:56.164896 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 17 00:12:56.164903 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 17 00:12:56.164909 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 17 00:12:56.164916 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 17 00:12:56.164923 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 17 00:12:56.164930 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 17 00:12:56.164937 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 17 00:12:56.164943 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 17 00:12:56.164951 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 17 00:12:56.164958 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 17 00:12:56.164965 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 17 00:12:56.164972 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 17 00:12:56.164978 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 17 00:12:56.164985 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 17 00:12:56.164992 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 17 00:12:56.164998 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 17 00:12:56.165005 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 17 00:12:56.165012 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 17 00:12:56.165018 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 17 00:12:56.165026 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 17 00:12:56.165034 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 17 00:12:56.165040 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 17 00:12:56.165047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 17 00:12:56.165054 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 17 00:12:56.165060 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 17 00:12:56.165067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 17 00:12:56.165074 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 17 00:12:56.165081 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 17 00:12:56.165095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 17 00:12:56.165102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 17 00:12:56.165111 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 17 00:12:56.165118 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 17 00:12:56.165125 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 17 00:12:56.165132 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 17 00:12:56.165139 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 17 00:12:56.165146 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 17 00:12:56.165154 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 17 00:12:56.165161 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 17 00:12:56.165169 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 17 00:12:56.165176 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 17 00:12:56.165183 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 17 00:12:56.165190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 17 00:12:56.165197 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 17 00:12:56.165204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 17 00:12:56.165212 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 17 00:12:56.165219 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 17 00:12:56.165226 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 17 00:12:56.165233 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 17 00:12:56.165241 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 17 00:12:56.165248 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 17 00:12:56.165255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 17 00:12:56.165263 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 17 00:12:56.165270 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 17 00:12:56.165277 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 17 00:12:56.165284 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 17 00:12:56.165291 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:12:56.165299 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:12:56.165306 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 17 00:12:56.165313 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 17 00:12:56.165322 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 17 00:12:56.165329 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 17 00:12:56.165336 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 17 00:12:56.165343 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 17 00:12:56.165350 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 17 00:12:56.165357 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 17 00:12:56.165364 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 17 00:12:56.165371 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 17 00:12:56.165378 kernel: Detected PIPT I-cache on CPU0
May 17 00:12:56.165385 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:12:56.165392 kernel: CPU features: detected: Virtualization Host Extensions
May 17 00:12:56.165401 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:12:56.165408 kernel: CPU features: detected: Spectre-v4
May 17 00:12:56.165415 kernel: CPU features: detected: Spectre-BHB
May 17 00:12:56.165422 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:12:56.165430 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:12:56.165437 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:12:56.165444 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:12:56.165451 kernel: alternatives: applying boot alternatives
May 17 00:12:56.165460 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:12:56.165468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:12:56.165476 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 17 00:12:56.165483 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 17 00:12:56.165490 kernel: printk: log_buf_len min size: 262144 bytes
May 17 00:12:56.165498 kernel: printk: log_buf_len: 1048576 bytes
May 17 00:12:56.165505 kernel: printk: early log buf free: 250032(95%)
May 17 00:12:56.165512 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 17 00:12:56.165520 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 17 00:12:56.165527 kernel: Fallback order for Node 0: 0
May 17 00:12:56.165534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 17 00:12:56.165541 kernel: Policy zone: Normal
May 17 00:12:56.165549 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:12:56.165556 kernel: software IO TLB: area num 128.
May 17 00:12:56.165565 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 17 00:12:56.165572 kernel: Memory: 262922452K/268174336K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 5251884K reserved, 0K cma-reserved)
May 17 00:12:56.165580 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 17 00:12:56.165587 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:12:56.165597 kernel: rcu: RCU event tracing is enabled.
May 17 00:12:56.165605 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 17 00:12:56.165612 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:12:56.165619 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:12:56.165627 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:12:56.165634 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 17 00:12:56.165642 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:12:56.165651 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 17 00:12:56.165658 kernel: GICv3: 672 SPIs implemented
May 17 00:12:56.165665 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:12:56.165672 kernel: Root IRQ handler: gic_handle_irq
May 17 00:12:56.165679 kernel: GICv3: GICv3 features: 16 PPIs
May 17 00:12:56.165687 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 17 00:12:56.165694 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 17 00:12:56.165701 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 17 00:12:56.165708 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 17 00:12:56.165715 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 17 00:12:56.165722 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 17 00:12:56.165729 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 17 00:12:56.165736 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 17 00:12:56.165745 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 17 00:12:56.165752 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 17 00:12:56.165760 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165767 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165774 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 17 00:12:56.165782 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165789 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165796 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 17 00:12:56.165804 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165811 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165818 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 17 00:12:56.165827 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165835 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165842 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 17 00:12:56.165849 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165856 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165864 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 17 00:12:56.165871 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165878 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165886 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 17 00:12:56.165893 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165900 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165909 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 17 00:12:56.165916 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165923 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165931 kernel: GICv3: using LPI property table @0x00000800003e0000
May 17 00:12:56.165938 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
May 17 00:12:56.165945 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:12:56.165952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.165960 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 17 00:12:56.165967 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 17 00:12:56.165974 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:12:56.165982 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:12:56.165990 kernel: Console: colour dummy device 80x25
May 17 00:12:56.165998 kernel: printk: console [tty0] enabled
May 17 00:12:56.166005 kernel: ACPI: Core revision 20230628
May 17 00:12:56.166013 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:12:56.166020 kernel: pid_max: default: 81920 minimum: 640
May 17 00:12:56.166028 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:12:56.166035 kernel: landlock: Up and running.
May 17 00:12:56.166042 kernel: SELinux: Initializing.
May 17 00:12:56.166050 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:12:56.166059 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:12:56.166066 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 00:12:56.166074 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 00:12:56.166081 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:12:56.166089 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:12:56.166096 kernel: Platform MSI: ITS@0x100100040000 domain created
May 17 00:12:56.166103 kernel: Platform MSI: ITS@0x100100060000 domain created
May 17 00:12:56.166110 kernel: Platform MSI: ITS@0x100100080000 domain created
May 17 00:12:56.166118 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 17 00:12:56.166126 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 17 00:12:56.166133 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 17 00:12:56.166140 kernel: Platform MSI: ITS@0x100100100000 domain created
May 17 00:12:56.166148 kernel: Platform MSI: ITS@0x100100120000 domain created
May 17 00:12:56.166155 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 17 00:12:56.166162 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 17 00:12:56.166169 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 17 00:12:56.166176 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 17 00:12:56.166184 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 17 00:12:56.166191 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 17 00:12:56.166199 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 17 00:12:56.166206 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 17 00:12:56.166214 kernel: Remapping and enabling EFI services.
May 17 00:12:56.166221 kernel: smp: Bringing up secondary CPUs ...
May 17 00:12:56.166228 kernel: Detected PIPT I-cache on CPU1
May 17 00:12:56.166235 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 17 00:12:56.166243 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 17 00:12:56.166250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166257 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 17 00:12:56.166266 kernel: Detected PIPT I-cache on CPU2
May 17 00:12:56.166274 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 17 00:12:56.166281 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 17 00:12:56.166288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166295 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 17 00:12:56.166303 kernel: Detected PIPT I-cache on CPU3
May 17 00:12:56.166310 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 17 00:12:56.166317 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 17 00:12:56.166325 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166332 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 17 00:12:56.166340 kernel: Detected PIPT I-cache on CPU4
May 17 00:12:56.166347 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 17 00:12:56.166355 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 17 00:12:56.166362 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166369 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 17 00:12:56.166376 kernel: Detected PIPT I-cache on CPU5
May 17 00:12:56.166384 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 17 00:12:56.166391 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 17 00:12:56.166398 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166407 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 17 00:12:56.166414 kernel: Detected PIPT I-cache on CPU6
May 17 00:12:56.166421 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 17 00:12:56.166429 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 17 00:12:56.166436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166443 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 17 00:12:56.166450 kernel: Detected PIPT I-cache on CPU7
May 17 00:12:56.166458 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 17 00:12:56.166465 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 17 00:12:56.166474 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166481 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 17 00:12:56.166488 kernel: Detected PIPT I-cache on CPU8
May 17 00:12:56.166496 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 17 00:12:56.166503 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 17 00:12:56.166510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166517 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 17 00:12:56.166524 kernel: Detected PIPT I-cache on CPU9
May 17 00:12:56.166532 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 17 00:12:56.166539 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 17 00:12:56.166547 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166554 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 17 00:12:56.166561 kernel: Detected PIPT I-cache on CPU10
May 17 00:12:56.166569 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 17 00:12:56.166576 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 17 00:12:56.166583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166593 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 17 00:12:56.166600 kernel: Detected PIPT I-cache on CPU11
May 17 00:12:56.166608 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 17 00:12:56.166615 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 17 00:12:56.166624 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166631 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 17 00:12:56.166638 kernel: Detected PIPT I-cache on CPU12
May 17 00:12:56.166646 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 17 00:12:56.166653 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 17 00:12:56.166660 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166667 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 17 00:12:56.166675 kernel: Detected PIPT I-cache on CPU13
May 17 00:12:56.166682 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 17 00:12:56.166691 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 17 00:12:56.166698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166705 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 17 00:12:56.166713 kernel: Detected PIPT I-cache on CPU14
May 17 00:12:56.166720 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 17 00:12:56.166727 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 17 00:12:56.166735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166742 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 17 00:12:56.166749 kernel: Detected PIPT I-cache on CPU15
May 17 00:12:56.166758 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 17 00:12:56.166765 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 17 00:12:56.166772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166780 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 17 00:12:56.166787 kernel: Detected PIPT I-cache on CPU16
May 17 00:12:56.166794 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 17 00:12:56.166802 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 17 00:12:56.166809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166816 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 17 00:12:56.166824 kernel: Detected PIPT I-cache on CPU17
May 17 00:12:56.166841 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 17 00:12:56.166850 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 17 00:12:56.166857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166865 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 17 00:12:56.166873 kernel: Detected PIPT I-cache on CPU18
May 17 00:12:56.166880 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 17 00:12:56.166888 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 17 00:12:56.166896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166903 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 17 00:12:56.166912 kernel: Detected PIPT I-cache on CPU19
May 17 00:12:56.166920 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 17 00:12:56.166929 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
May 17 00:12:56.166936 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166944 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 17 00:12:56.166951 kernel: Detected PIPT I-cache on CPU20
May 17 00:12:56.166959 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 17 00:12:56.166968 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
May 17 00:12:56.166977 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166985 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 17 00:12:56.166992 kernel: Detected PIPT I-cache on CPU21
May 17 00:12:56.167000 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 17 00:12:56.167008 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
May 17 00:12:56.167015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167023 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 17 00:12:56.167032 kernel: Detected PIPT I-cache on CPU22
May 17 00:12:56.167040 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 17 00:12:56.167047 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
May 17 00:12:56.167055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167063 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 17 00:12:56.167070 kernel: Detected PIPT I-cache on CPU23
May 17 00:12:56.167078 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 17 00:12:56.167085 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
May 17 00:12:56.167093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167101 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 17 00:12:56.167110 kernel: Detected PIPT I-cache on CPU24
May 17 00:12:56.167118 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 17 00:12:56.167125 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
May 17 00:12:56.167133 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167141 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 17 00:12:56.167148 kernel: Detected PIPT I-cache on CPU25
May 17 00:12:56.167156 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 17 00:12:56.167164 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
May 17 00:12:56.167171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167180 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 17 00:12:56.167188 kernel: Detected PIPT I-cache on CPU26
May 17 00:12:56.167196 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 17 00:12:56.167203 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
May 17 00:12:56.167211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167219 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 17 00:12:56.167226 kernel: Detected PIPT I-cache on CPU27
May 17 00:12:56.167234 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 17 00:12:56.167242 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
May 17 00:12:56.167249 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167258 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 17 00:12:56.167266 kernel: Detected PIPT I-cache on CPU28
May 17 00:12:56.167273 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
May 17 00:12:56.167281 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000
May 17 00:12:56.167289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167296 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
May 17 00:12:56.167304 kernel: Detected PIPT I-cache on CPU29
May 17 00:12:56.167312 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
May 17 00:12:56.167319 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000
May 17 00:12:56.167328 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167336 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
May 17 00:12:56.167343 kernel: Detected PIPT I-cache on CPU30
May 17 00:12:56.167351 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
May 17 00:12:56.167359 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000
May 17 00:12:56.167366 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167374 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
May 17 00:12:56.167382 kernel: Detected PIPT I-cache on CPU31
May 17 00:12:56.167389 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
May 17 00:12:56.167397 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000
May 17 00:12:56.167406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.167414 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
May 17 00:12:56.167421 kernel: Detected PIPT I-cache on CPU32
May 17 00:12:56.167429 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
May 17 00:12:56.167437 kernel: GICv3: CPU32: using allocated LPI
pending table @0x00000800009f0000 May 17 00:12:56.167444 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167452 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 17 00:12:56.167459 kernel: Detected PIPT I-cache on CPU33 May 17 00:12:56.167467 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 17 00:12:56.167476 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 17 00:12:56.167484 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167492 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 17 00:12:56.167501 kernel: Detected PIPT I-cache on CPU34 May 17 00:12:56.167508 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 17 00:12:56.167516 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 17 00:12:56.167524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167531 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 17 00:12:56.167539 kernel: Detected PIPT I-cache on CPU35 May 17 00:12:56.167547 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 17 00:12:56.167556 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 17 00:12:56.167564 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167571 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 17 00:12:56.167579 kernel: Detected PIPT I-cache on CPU36 May 17 00:12:56.167586 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 17 00:12:56.167596 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 17 00:12:56.167604 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167612 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 
17 00:12:56.167619 kernel: Detected PIPT I-cache on CPU37 May 17 00:12:56.167629 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 17 00:12:56.167636 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 17 00:12:56.167644 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167652 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 17 00:12:56.167659 kernel: Detected PIPT I-cache on CPU38 May 17 00:12:56.167667 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 17 00:12:56.167675 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 17 00:12:56.167682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167690 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 17 00:12:56.167699 kernel: Detected PIPT I-cache on CPU39 May 17 00:12:56.167707 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 17 00:12:56.167715 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 17 00:12:56.167722 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167730 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 17 00:12:56.167738 kernel: Detected PIPT I-cache on CPU40 May 17 00:12:56.167745 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 17 00:12:56.167753 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 17 00:12:56.167762 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167770 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 17 00:12:56.167777 kernel: Detected PIPT I-cache on CPU41 May 17 00:12:56.167785 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 17 00:12:56.167793 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 May 17 00:12:56.167801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167808 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 17 00:12:56.167816 kernel: Detected PIPT I-cache on CPU42 May 17 00:12:56.167824 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 17 00:12:56.167832 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 17 00:12:56.167841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167848 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 17 00:12:56.167856 kernel: Detected PIPT I-cache on CPU43 May 17 00:12:56.167864 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 17 00:12:56.167872 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 17 00:12:56.167879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167887 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 17 00:12:56.167894 kernel: Detected PIPT I-cache on CPU44 May 17 00:12:56.167902 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 17 00:12:56.167911 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 17 00:12:56.167919 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167927 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 17 00:12:56.167934 kernel: Detected PIPT I-cache on CPU45 May 17 00:12:56.167942 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 17 00:12:56.167950 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 17 00:12:56.167957 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167967 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
May 17 00:12:56.167975 kernel: Detected PIPT I-cache on CPU46 May 17 00:12:56.167982 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 17 00:12:56.167992 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 17 00:12:56.167999 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168007 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 17 00:12:56.168015 kernel: Detected PIPT I-cache on CPU47 May 17 00:12:56.168022 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 17 00:12:56.168030 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 17 00:12:56.168038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168045 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 17 00:12:56.168053 kernel: Detected PIPT I-cache on CPU48 May 17 00:12:56.168062 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 17 00:12:56.168070 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 17 00:12:56.168077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168085 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 17 00:12:56.168092 kernel: Detected PIPT I-cache on CPU49 May 17 00:12:56.168100 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 17 00:12:56.168108 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 17 00:12:56.168116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168123 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 17 00:12:56.168131 kernel: Detected PIPT I-cache on CPU50 May 17 00:12:56.168140 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 17 00:12:56.168148 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 May 17 00:12:56.168155 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168163 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 17 00:12:56.168171 kernel: Detected PIPT I-cache on CPU51 May 17 00:12:56.168178 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 17 00:12:56.168186 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 17 00:12:56.168194 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168201 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 17 00:12:56.168210 kernel: Detected PIPT I-cache on CPU52 May 17 00:12:56.168218 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 17 00:12:56.168226 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 17 00:12:56.168235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168242 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 17 00:12:56.168250 kernel: Detected PIPT I-cache on CPU53 May 17 00:12:56.168258 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 17 00:12:56.168266 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 17 00:12:56.168273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168281 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 17 00:12:56.168290 kernel: Detected PIPT I-cache on CPU54 May 17 00:12:56.168298 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 17 00:12:56.168306 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 17 00:12:56.168313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168321 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] May 17 00:12:56.168329 kernel: Detected PIPT I-cache on CPU55 May 17 00:12:56.168336 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 17 00:12:56.168344 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 17 00:12:56.168352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168361 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 17 00:12:56.168368 kernel: Detected PIPT I-cache on CPU56 May 17 00:12:56.168376 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 17 00:12:56.168384 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 17 00:12:56.168392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168399 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 17 00:12:56.168407 kernel: Detected PIPT I-cache on CPU57 May 17 00:12:56.168415 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 17 00:12:56.168423 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 17 00:12:56.168431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168439 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 17 00:12:56.168447 kernel: Detected PIPT I-cache on CPU58 May 17 00:12:56.168454 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 17 00:12:56.168462 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 17 00:12:56.168470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168478 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 17 00:12:56.168486 kernel: Detected PIPT I-cache on CPU59 May 17 00:12:56.168493 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 17 00:12:56.168501 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 May 17 00:12:56.168510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168518 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 17 00:12:56.168526 kernel: Detected PIPT I-cache on CPU60 May 17 00:12:56.168534 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 17 00:12:56.168541 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 17 00:12:56.168549 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168557 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 17 00:12:56.168564 kernel: Detected PIPT I-cache on CPU61 May 17 00:12:56.168572 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 17 00:12:56.168581 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 17 00:12:56.168591 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168599 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 17 00:12:56.168606 kernel: Detected PIPT I-cache on CPU62 May 17 00:12:56.168614 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 17 00:12:56.168622 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 17 00:12:56.168630 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168637 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 17 00:12:56.168645 kernel: Detected PIPT I-cache on CPU63 May 17 00:12:56.168653 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 17 00:12:56.168662 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 17 00:12:56.168670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168678 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] May 17 00:12:56.168686 kernel: Detected PIPT I-cache on CPU64 May 17 00:12:56.168693 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 17 00:12:56.168701 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 17 00:12:56.168709 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168716 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 17 00:12:56.168724 kernel: Detected PIPT I-cache on CPU65 May 17 00:12:56.168733 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 17 00:12:56.168740 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 17 00:12:56.168748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168756 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 17 00:12:56.168763 kernel: Detected PIPT I-cache on CPU66 May 17 00:12:56.168771 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 17 00:12:56.168779 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 17 00:12:56.168786 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168794 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 17 00:12:56.168802 kernel: Detected PIPT I-cache on CPU67 May 17 00:12:56.168811 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 17 00:12:56.168819 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 17 00:12:56.168826 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168834 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 17 00:12:56.168842 kernel: Detected PIPT I-cache on CPU68 May 17 00:12:56.168850 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 17 00:12:56.168857 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 May 17 00:12:56.168865 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168873 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 17 00:12:56.168882 kernel: Detected PIPT I-cache on CPU69 May 17 00:12:56.168890 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 17 00:12:56.168897 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 17 00:12:56.168905 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168913 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 17 00:12:56.168921 kernel: Detected PIPT I-cache on CPU70 May 17 00:12:56.168929 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 17 00:12:56.168936 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 17 00:12:56.168944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168952 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 17 00:12:56.168961 kernel: Detected PIPT I-cache on CPU71 May 17 00:12:56.168968 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 17 00:12:56.168976 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 17 00:12:56.168984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168991 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 17 00:12:56.168999 kernel: Detected PIPT I-cache on CPU72 May 17 00:12:56.169007 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 17 00:12:56.169014 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 17 00:12:56.169022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169031 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] May 17 00:12:56.169039 kernel: Detected PIPT I-cache on CPU73 May 17 00:12:56.169047 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 17 00:12:56.169054 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 17 00:12:56.169062 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169070 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 17 00:12:56.169077 kernel: Detected PIPT I-cache on CPU74 May 17 00:12:56.169085 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 17 00:12:56.169093 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 17 00:12:56.169102 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169110 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 17 00:12:56.169117 kernel: Detected PIPT I-cache on CPU75 May 17 00:12:56.169125 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 17 00:12:56.169133 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 17 00:12:56.169140 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169148 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 17 00:12:56.169156 kernel: Detected PIPT I-cache on CPU76 May 17 00:12:56.169163 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 17 00:12:56.169171 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 17 00:12:56.169180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169188 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 17 00:12:56.169195 kernel: Detected PIPT I-cache on CPU77 May 17 00:12:56.169203 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 17 00:12:56.169211 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 17 00:12:56.169218 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169226 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 17 00:12:56.169234 kernel: Detected PIPT I-cache on CPU78 May 17 00:12:56.169241 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 17 00:12:56.169250 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 17 00:12:56.169258 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169265 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 17 00:12:56.169273 kernel: Detected PIPT I-cache on CPU79 May 17 00:12:56.169281 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 17 00:12:56.169288 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 17 00:12:56.169296 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169304 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 17 00:12:56.169312 kernel: smp: Brought up 1 node, 80 CPUs May 17 00:12:56.169319 kernel: SMP: Total of 80 processors activated. 
May 17 00:12:56.169328 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:12:56.169336 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:12:56.169344 kernel: CPU features: detected: Common not Private translations May 17 00:12:56.169351 kernel: CPU features: detected: CRC32 instructions May 17 00:12:56.169359 kernel: CPU features: detected: Enhanced Virtualization Traps May 17 00:12:56.169367 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:12:56.169375 kernel: CPU features: detected: LSE atomic instructions May 17 00:12:56.169382 kernel: CPU features: detected: Privileged Access Never May 17 00:12:56.169390 kernel: CPU features: detected: RAS Extension Support May 17 00:12:56.169399 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 17 00:12:56.169406 kernel: CPU: All CPU(s) started at EL2 May 17 00:12:56.169414 kernel: alternatives: applying system-wide alternatives May 17 00:12:56.169422 kernel: devtmpfs: initialized May 17 00:12:56.169430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:12:56.169437 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 17 00:12:56.169445 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:12:56.169453 kernel: SMBIOS 3.4.0 present. 
May 17 00:12:56.169461 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 17 00:12:56.169470 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:12:56.169477 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 17 00:12:56.169485 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:12:56.169493 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:12:56.169501 kernel: audit: initializing netlink subsys (disabled) May 17 00:12:56.169509 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 17 00:12:56.169516 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:12:56.169524 kernel: cpuidle: using governor menu May 17 00:12:56.169532 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 00:12:56.169540 kernel: ASID allocator initialised with 32768 entries May 17 00:12:56.169548 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:12:56.169556 kernel: Serial: AMBA PL011 UART driver May 17 00:12:56.169564 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 17 00:12:56.169572 kernel: Modules: 0 pages in range for non-PLT usage May 17 00:12:56.169579 kernel: Modules: 509024 pages in range for PLT usage May 17 00:12:56.169589 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:12:56.169597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:12:56.169605 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:12:56.169614 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 17 00:12:56.169622 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:12:56.169629 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:12:56.169637 kernel: HugeTLB: registered 64.0 KiB 
page size, pre-allocated 0 pages May 17 00:12:56.169645 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 17 00:12:56.169652 kernel: ACPI: Added _OSI(Module Device) May 17 00:12:56.169660 kernel: ACPI: Added _OSI(Processor Device) May 17 00:12:56.169667 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:12:56.169675 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:12:56.169684 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 17 00:12:56.169692 kernel: ACPI: Interpreter enabled May 17 00:12:56.169699 kernel: ACPI: Using GIC for interrupt routing May 17 00:12:56.169707 kernel: ACPI: MCFG table detected, 8 entries May 17 00:12:56.169715 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169723 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169730 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169738 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169746 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169755 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169763 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169771 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 17 00:12:56.169779 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 17 00:12:56.169787 kernel: printk: console [ttyAMA0] enabled May 17 00:12:56.169794 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 17 00:12:56.169802 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 17 00:12:56.169930 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.170002 kernel: acpi PNP0A08:00: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 17 00:12:56.170066 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.170127 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.170189 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 17 00:12:56.170250 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] May 17 00:12:56.170261 kernel: PCI host bridge to bus 000d:00 May 17 00:12:56.170331 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 17 00:12:56.170392 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 17 00:12:56.170448 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 17 00:12:56.170526 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:12:56.170604 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:12:56.170672 kernel: pci 000d:00:01.0: enabling Extended Tags May 17 00:12:56.170736 kernel: pci 000d:00:01.0: supports D1 D2 May 17 00:12:56.170804 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.170877 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:12:56.170942 kernel: pci 000d:00:02.0: supports D1 D2 May 17 00:12:56.171006 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 17 00:12:56.171076 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:12:56.171140 kernel: pci 000d:00:03.0: supports D1 D2 May 17 00:12:56.171204 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.171277 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:12:56.171344 kernel: pci 000d:00:04.0: supports D1 D2 May 17 00:12:56.171408 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 17 00:12:56.171418 kernel: acpiphp: Slot [1] registered May 17 00:12:56.171426 
kernel: acpiphp: Slot [2] registered May 17 00:12:56.171434 kernel: acpiphp: Slot [3] registered May 17 00:12:56.171441 kernel: acpiphp: Slot [4] registered May 17 00:12:56.171502 kernel: pci_bus 000d:00: on NUMA node 0 May 17 00:12:56.171568 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.171665 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.171729 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.171793 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.171855 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.171917 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.171982 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.172047 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.172111 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.172176 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.172241 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.172303 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.172368 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 17 00:12:56.172434 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 
00:12:56.172498 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] May 17 00:12:56.172561 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:12:56.172628 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 17 00:12:56.172691 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:12:56.172754 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 17 00:12:56.172816 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:12:56.172880 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.172945 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173010 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173073 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173137 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173201 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173265 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173329 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173393 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173456 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173518 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173582 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173650 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173714 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173777 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173841 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] May 17 00:12:56.173903 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.173970 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 17 00:12:56.174033 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 00:12:56.174097 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 17 00:12:56.174160 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 17 00:12:56.174223 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:12:56.174287 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 17 00:12:56.174352 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 17 00:12:56.174416 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:12:56.174479 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 17 00:12:56.174542 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 17 00:12:56.174609 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:12:56.174668 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 17 00:12:56.174726 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 17 00:12:56.174797 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 17 00:12:56.174856 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 00:12:56.174924 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] May 17 00:12:56.174983 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:12:56.175058 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 17 00:12:56.175119 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:12:56.175184 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 17 
00:12:56.175242 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:12:56.175252 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 17 00:12:56.175322 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.175388 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.175448 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.175511 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.175571 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 17 00:12:56.175637 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 17 00:12:56.175647 kernel: PCI host bridge to bus 0000:00 May 17 00:12:56.175711 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 17 00:12:56.175768 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 17 00:12:56.175823 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:12:56.175898 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:12:56.175969 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:12:56.176033 kernel: pci 0000:00:01.0: enabling Extended Tags May 17 00:12:56.176096 kernel: pci 0000:00:01.0: supports D1 D2 May 17 00:12:56.176159 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176231 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:12:56.176297 kernel: pci 0000:00:02.0: supports D1 D2 May 17 00:12:56.176363 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176432 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:12:56.176496 kernel: pci 0000:00:03.0: supports D1 D2 May 17 00:12:56.176558 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176632 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:12:56.176695 kernel: pci 0000:00:04.0: supports D1 D2 May 17 00:12:56.176761 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176771 kernel: acpiphp: Slot [1-1] registered May 17 00:12:56.176779 kernel: acpiphp: Slot [2-1] registered May 17 00:12:56.176787 kernel: acpiphp: Slot [3-1] registered May 17 00:12:56.176794 kernel: acpiphp: Slot [4-1] registered May 17 00:12:56.176849 kernel: pci_bus 0000:00: on NUMA node 0 May 17 00:12:56.176912 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.176974 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.177038 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.177104 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.177167 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.177229 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.177292 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.177356 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.177419 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.177485 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.177548 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 May 17 00:12:56.177614 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.177677 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 17 00:12:56.177742 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:12:56.177805 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 17 00:12:56.177868 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:12:56.177931 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 17 00:12:56.177996 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:12:56.178059 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 17 00:12:56.178123 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:12:56.178185 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178247 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178310 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178372 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178437 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178499 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178562 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178628 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178691 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178753 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178817 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178879 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178943 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.179007 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.179071 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.179133 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.179197 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.179259 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 17 00:12:56.179322 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:12:56.179386 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 17 00:12:56.179448 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 17 00:12:56.179514 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:12:56.179577 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 17 00:12:56.179643 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 17 00:12:56.179710 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:12:56.179775 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 17 00:12:56.179837 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 17 00:12:56.179901 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:12:56.179958 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 17 00:12:56.180015 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 17 00:12:56.180084 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 17 00:12:56.180143 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:12:56.180208 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 17 
00:12:56.180267 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:12:56.180340 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 17 00:12:56.180400 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:12:56.180467 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 17 00:12:56.180525 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:12:56.180535 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 17 00:12:56.180607 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.180669 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.180731 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.180792 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.180856 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 17 00:12:56.180916 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 17 00:12:56.180926 kernel: PCI host bridge to bus 0005:00 May 17 00:12:56.180989 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 17 00:12:56.181045 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 17 00:12:56.181101 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 17 00:12:56.181169 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:12:56.181243 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:12:56.181307 kernel: pci 0005:00:01.0: supports D1 D2 May 17 00:12:56.181371 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181440 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
May 17 00:12:56.181503 kernel: pci 0005:00:03.0: supports D1 D2 May 17 00:12:56.181566 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181640 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:12:56.181704 kernel: pci 0005:00:05.0: supports D1 D2 May 17 00:12:56.181767 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181838 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 17 00:12:56.181905 kernel: pci 0005:00:07.0: supports D1 D2 May 17 00:12:56.181970 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181980 kernel: acpiphp: Slot [1-2] registered May 17 00:12:56.181987 kernel: acpiphp: Slot [2-2] registered May 17 00:12:56.182061 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 17 00:12:56.182127 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 17 00:12:56.182193 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 17 00:12:56.182268 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 17 00:12:56.182336 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 17 00:12:56.182401 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 17 00:12:56.182459 kernel: pci_bus 0005:00: on NUMA node 0 May 17 00:12:56.182525 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.182592 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.182655 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.182720 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.182783 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.182847 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.182912 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.182978 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.183040 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:12:56.183104 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.183183 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.183250 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 17 00:12:56.183315 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 17 00:12:56.183380 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:12:56.183444 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 17 00:12:56.183506 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:12:56.183576 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 17 00:12:56.183645 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:12:56.183709 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 17 00:12:56.183772 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:12:56.183836 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.183905 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.183969 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 
00:12:56.184031 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184095 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184159 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184221 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184285 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184347 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184412 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184474 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184537 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184603 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184666 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184729 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184793 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184855 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.184919 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 17 00:12:56.184985 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:12:56.185049 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 17 00:12:56.185112 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 17 00:12:56.185176 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:12:56.185244 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 17 00:12:56.185309 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 17 00:12:56.185376 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 17 00:12:56.185438 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 17 00:12:56.185502 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:12:56.185568 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 17 00:12:56.185637 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 17 00:12:56.185713 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 17 00:12:56.185776 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 17 00:12:56.185842 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:12:56.185902 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 17 00:12:56.185957 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 17 00:12:56.186026 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 17 00:12:56.186086 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:12:56.186160 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 17 00:12:56.186223 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:12:56.186288 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 17 00:12:56.186346 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:12:56.186411 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 17 00:12:56.186470 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:12:56.186480 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 17 00:12:56.186550 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.186617 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.186678 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability] May 17 00:12:56.186743 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.186803 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 17 00:12:56.186864 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 17 00:12:56.186874 kernel: PCI host bridge to bus 0003:00 May 17 00:12:56.186940 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 17 00:12:56.186997 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 17 00:12:56.187053 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 17 00:12:56.187123 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:12:56.187195 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:12:56.187259 kernel: pci 0003:00:01.0: supports D1 D2 May 17 00:12:56.187325 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.187398 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:12:56.187462 kernel: pci 0003:00:03.0: supports D1 D2 May 17 00:12:56.187525 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.187600 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:12:56.187664 kernel: pci 0003:00:05.0: supports D1 D2 May 17 00:12:56.187728 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 17 00:12:56.187741 kernel: acpiphp: Slot [1-3] registered May 17 00:12:56.187749 kernel: acpiphp: Slot [2-3] registered May 17 00:12:56.187823 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 17 00:12:56.187893 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 17 00:12:56.187961 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 17 00:12:56.188031 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 17 00:12:56.188097 kernel: pci 
0003:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:12:56.188165 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 17 00:12:56.188233 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:12:56.188298 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 17 00:12:56.188363 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 00:12:56.188428 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 17 00:12:56.188500 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 17 00:12:56.188565 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 17 00:12:56.188637 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 17 00:12:56.188704 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 17 00:12:56.188769 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 17 00:12:56.188834 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 17 00:12:56.188901 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:12:56.188966 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 17 00:12:56.189030 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 00:12:56.189088 kernel: pci_bus 0003:00: on NUMA node 0 May 17 00:12:56.189153 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.189218 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.189280 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.189346 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.189408 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.189471 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.189534 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 17 00:12:56.189643 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 17 00:12:56.189708 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 17 00:12:56.189770 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:12:56.189831 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 17 00:12:56.189892 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:12:56.189966 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 17 00:12:56.190029 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 00:12:56.190091 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190155 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190217 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190278 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190339 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190400 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190462 kernel: pci 0003:00:05.0: BAR 13: no 
space for [io size 0x1000] May 17 00:12:56.190524 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190586 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190654 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190717 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190778 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190841 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.190902 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 17 00:12:56.190964 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:12:56.191028 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 17 00:12:56.191090 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 17 00:12:56.191155 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:12:56.191221 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 17 00:12:56.191288 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 17 00:12:56.191354 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 17 00:12:56.191420 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 17 00:12:56.191485 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 17 00:12:56.191554 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 17 00:12:56.191622 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 17 00:12:56.191688 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 17 00:12:56.191753 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 00:12:56.191819 kernel: pci 0003:03:00.0: BAR 2: failed to 
assign [io size 0x0020]
May 17 00:12:56.191883 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 17 00:12:56.191952 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 17 00:12:56.192018 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 17 00:12:56.192083 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 17 00:12:56.192149 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 17 00:12:56.192213 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 17 00:12:56.192277 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
May 17 00:12:56.192340 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
May 17 00:12:56.192407 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 00:12:56.192467 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 00:12:56.192524 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
May 17 00:12:56.192580 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
May 17 00:12:56.192659 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
May 17 00:12:56.192718 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
May 17 00:12:56.192787 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
May 17 00:12:56.192847 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
May 17 00:12:56.192913 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
May 17 00:12:56.192973 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
May 17 00:12:56.192984 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
May 17 00:12:56.193052 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:12:56.193115 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:12:56.193176 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
May 17 00:12:56.193239 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:12:56.193300 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
May 17 00:12:56.193360 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
May 17 00:12:56.193371 kernel: PCI host bridge to bus 000c:00
May 17 00:12:56.193433 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
May 17 00:12:56.193490 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
May 17 00:12:56.193546 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
May 17 00:12:56.193622 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000
May 17 00:12:56.193694 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400
May 17 00:12:56.193758 kernel: pci 000c:00:01.0: enabling Extended Tags
May 17 00:12:56.193823 kernel: pci 000c:00:01.0: supports D1 D2
May 17 00:12:56.193885 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.193956 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400
May 17 00:12:56.194019 kernel: pci 000c:00:02.0: supports D1 D2
May 17 00:12:56.194085 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.194157 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400
May 17 00:12:56.194222 kernel: pci 000c:00:03.0: supports D1 D2
May 17 00:12:56.194284 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.194354 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400
May 17 00:12:56.194417 kernel: pci 000c:00:04.0: supports D1 D2
May 17 00:12:56.194481 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.194493 kernel: acpiphp: Slot [1-4] registered
May 17 00:12:56.194502 kernel: acpiphp: Slot [2-4] registered
May 17 00:12:56.194510 kernel: acpiphp: Slot [3-2] registered
May 17 00:12:56.194518 kernel: acpiphp: Slot [4-2] registered
May 17 00:12:56.194573 kernel: pci_bus 000c:00: on NUMA node 0
May 17 00:12:56.194643 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:12:56.194706 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 00:12:56.194773 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 00:12:56.194840 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:12:56.194906 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:12:56.194970 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 00:12:56.195035 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:12:56.195098 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 00:12:56.195161 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 00:12:56.195226 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:12:56.195291 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.195355 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.195418 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff]
May 17 00:12:56.195482 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref]
May 17 00:12:56.195545 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff]
May 17 00:12:56.195831 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref]
May 17 00:12:56.195901 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
May 17 00:12:56.195967 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref]
May 17 00:12:56.196029 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff]
May 17 00:12:56.196090 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref]
May 17 00:12:56.196152 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.196213 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.196275 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.196337 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.196399 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.196462 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.196525 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.196586 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.196652 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.196713 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.196775 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.196836 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.196898 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.196960 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.197024 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.197086 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.197148 kernel: pci 000c:00:01.0: PCI bridge to [bus 01]
May 17 00:12:56.197210 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]
May 17 00:12:56.197271 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]
May 17 00:12:56.197333 kernel: pci 000c:00:02.0: PCI bridge to [bus 02]
May 17 00:12:56.197394 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]
May 17 00:12:56.197459 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]
May 17 00:12:56.197521 kernel: pci 000c:00:03.0: PCI bridge to [bus 03]
May 17 00:12:56.197583 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]
May 17 00:12:56.197648 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]
May 17 00:12:56.197711 kernel: pci 000c:00:04.0: PCI bridge to [bus 04]
May 17 00:12:56.197772 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]
May 17 00:12:56.197837 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]
May 17 00:12:56.197894 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window]
May 17 00:12:56.197949 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window]
May 17 00:12:56.198015 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff]
May 17 00:12:56.198073 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref]
May 17 00:12:56.198145 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff]
May 17 00:12:56.198204 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref]
May 17 00:12:56.198268 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff]
May 17 00:12:56.198325 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref]
May 17 00:12:56.198390 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff]
May 17 00:12:56.198447 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref]
May 17 00:12:56.198457 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff])
May 17 00:12:56.198526 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:12:56.198591 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:12:56.198652 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability]
May 17 00:12:56.198711 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:12:56.198771 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00
May 17 00:12:56.198830 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff]
May 17 00:12:56.198841 kernel: PCI host bridge to bus 0002:00
May 17 00:12:56.198904 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window]
May 17 00:12:56.198962 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window]
May 17 00:12:56.199017 kernel: pci_bus 0002:00: root bus resource [bus 00-ff]
May 17 00:12:56.199086 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000
May 17 00:12:56.199156 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400
May 17 00:12:56.199219 kernel: pci 0002:00:01.0: supports D1 D2
May 17 00:12:56.199281 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.199350 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400
May 17 00:12:56.199415 kernel: pci 0002:00:03.0: supports D1 D2
May 17 00:12:56.199477 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.199545 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400
May 17 00:12:56.199611 kernel: pci 0002:00:05.0: supports D1 D2
May 17 00:12:56.199673 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.199742 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400
May 17 00:12:56.199806 kernel: pci 0002:00:07.0: supports D1 D2
May 17 00:12:56.199869 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.199879 kernel: acpiphp: Slot [1-5] registered
May 17 00:12:56.199887 kernel: acpiphp: Slot [2-5] registered
May 17 00:12:56.199895 kernel: acpiphp: Slot [3-3] registered
May 17 00:12:56.199903 kernel: acpiphp: Slot [4-3] registered
May 17 00:12:56.199956 kernel: pci_bus 0002:00: on NUMA node 0
May 17 00:12:56.200019 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:12:56.200082 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 17 00:12:56.200146 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 17 00:12:56.200210 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:12:56.200273 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:12:56.200335 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 00:12:56.200399 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:12:56.200462 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 00:12:56.200524 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 00:12:56.200587 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:12:56.200651 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.200714 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.200776 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff]
May 17 00:12:56.200840 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref]
May 17 00:12:56.200902 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff]
May 17 00:12:56.200965 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref]
May 17 00:12:56.201027 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff]
May 17 00:12:56.201089 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref]
May 17 00:12:56.201155 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff]
May 17 00:12:56.201218 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref]
May 17 00:12:56.201283 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.201346 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.201409 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.201471 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.201535 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.201600 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.201663 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.201725 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.201791 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.201854 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.201916 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.201978 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.202040 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.202102 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.202163 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.202225 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.202286 kernel: pci 0002:00:01.0: PCI bridge to [bus 01]
May 17 00:12:56.202348 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]
May 17 00:12:56.202412 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]
May 17 00:12:56.202475 kernel: pci 0002:00:03.0: PCI bridge to [bus 02]
May 17 00:12:56.202544 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]
May 17 00:12:56.202612 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]
May 17 00:12:56.202674 kernel: pci 0002:00:05.0: PCI bridge to [bus 03]
May 17 00:12:56.202736 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]
May 17 00:12:56.202805 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]
May 17 00:12:56.202868 kernel: pci 0002:00:07.0: PCI bridge to [bus 04]
May 17 00:12:56.202930 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]
May 17 00:12:56.202993 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]
May 17 00:12:56.203051 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window]
May 17 00:12:56.203107 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window]
May 17 00:12:56.203176 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff]
May 17 00:12:56.203234 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref]
May 17 00:12:56.203299 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff]
May 17 00:12:56.203357 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref]
May 17 00:12:56.203429 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff]
May 17 00:12:56.203487 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref]
May 17 00:12:56.203554 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff]
May 17 00:12:56.203711 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref]
May 17 00:12:56.203724 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff])
May 17 00:12:56.203801 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:12:56.203862 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:12:56.203922 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability]
May 17 00:12:56.203980 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:12:56.204042 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00
May 17 00:12:56.204101 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff]
May 17 00:12:56.204111 kernel: PCI host bridge to bus 0001:00
May 17 00:12:56.204173 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window]
May 17 00:12:56.204228 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window]
May 17 00:12:56.204282 kernel: pci_bus 0001:00: root bus resource [bus 00-ff]
May 17 00:12:56.204353 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000
May 17 00:12:56.204422 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400
May 17 00:12:56.204485 kernel: pci 0001:00:01.0: enabling Extended Tags
May 17 00:12:56.204547 kernel: pci 0001:00:01.0: supports D1 D2
May 17 00:12:56.204613 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.204684 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400
May 17 00:12:56.204746 kernel: pci 0001:00:02.0: supports D1 D2
May 17 00:12:56.204810 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.204880 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400
May 17 00:12:56.204943 kernel: pci 0001:00:03.0: supports D1 D2
May 17 00:12:56.205004 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.205074 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400
May 17 00:12:56.205137 kernel: pci 0001:00:04.0: supports D1 D2
May 17 00:12:56.205202 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.205212 kernel: acpiphp: Slot [1-6] registered
May 17 00:12:56.205282 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000
May 17 00:12:56.205347 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref]
May 17 00:12:56.205412 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref]
May 17 00:12:56.205476 kernel: pci 0001:01:00.0: PME# supported from D3cold
May 17 00:12:56.205541 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:12:56.205618 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000
May 17 00:12:56.205688 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref]
May 17 00:12:56.205752 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref]
May 17 00:12:56.205816 kernel: pci 0001:01:00.1: PME# supported from D3cold
May 17 00:12:56.205827 kernel: acpiphp: Slot [2-6] registered
May 17 00:12:56.205835 kernel: acpiphp: Slot [3-4] registered
May 17 00:12:56.205844 kernel: acpiphp: Slot [4-4] registered
May 17 00:12:56.205898 kernel: pci_bus 0001:00: on NUMA node 0
May 17 00:12:56.205961 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:12:56.206026 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:12:56.206089 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:12:56.206164 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 17 00:12:56.206229 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:12:56.206291 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 00:12:56.206353 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 17 00:12:56.206417 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:12:56.206481 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.206544 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.206610 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref]
May 17 00:12:56.206673 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff]
May 17 00:12:56.206735 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff]
May 17 00:12:56.206797 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref]
May 17 00:12:56.206860 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff]
May 17 00:12:56.206925 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref]
May 17 00:12:56.206987 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff]
May 17 00:12:56.207049 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref]
May 17 00:12:56.207112 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.207174 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.207236 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.207298 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.207362 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.207425 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.207488 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.207549 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.207724 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.207789 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.207851 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.207912 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.207974 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.208038 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.208101 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.208162 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.208227 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref]
May 17 00:12:56.208292 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref]
May 17 00:12:56.208356 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref]
May 17 00:12:56.208420 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref]
May 17 00:12:56.208482 kernel: pci 0001:00:01.0: PCI bridge to [bus 01]
May 17 00:12:56.208546 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]
May 17 00:12:56.208611 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]
May 17 00:12:56.208674 kernel: pci 0001:00:02.0: PCI bridge to [bus 02]
May 17 00:12:56.208735 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]
May 17 00:12:56.208798 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]
May 17 00:12:56.208860 kernel: pci 0001:00:03.0: PCI bridge to [bus 03]
May 17 00:12:56.208924 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]
May 17 00:12:56.208986 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]
May 17 00:12:56.209049 kernel: pci 0001:00:04.0: PCI bridge to [bus 04]
May 17 00:12:56.209110 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]
May 17 00:12:56.209172 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]
May 17 00:12:56.209229 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window]
May 17 00:12:56.209286 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window]
May 17 00:12:56.209360 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff]
May 17 00:12:56.209418 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref]
May 17 00:12:56.209483 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff]
May 17 00:12:56.209542 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref]
May 17 00:12:56.209609 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff]
May 17 00:12:56.209667 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref]
May 17 00:12:56.209734 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff]
May 17 00:12:56.209791 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref]
May 17 00:12:56.209802 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff])
May 17 00:12:56.209870 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:12:56.209931 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR]
May 17 00:12:56.209991 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability]
May 17 00:12:56.210053 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 17 00:12:56.210113 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00
May 17 00:12:56.210172 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff]
May 17 00:12:56.210183 kernel: PCI host bridge to bus 0004:00
May 17 00:12:56.210246 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window]
May 17 00:12:56.210300 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window]
May 17 00:12:56.210356 kernel: pci_bus 0004:00: root bus resource [bus 00-ff]
May 17 00:12:56.210426 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000
May 17 00:12:56.210497 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400
May 17 00:12:56.210560 kernel: pci 0004:00:01.0: supports D1 D2
May 17 00:12:56.210627 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.210696 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400
May 17 00:12:56.210760 kernel: pci 0004:00:03.0: supports D1 D2
May 17 00:12:56.210823 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.210893 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400
May 17 00:12:56.210957 kernel: pci 0004:00:05.0: supports D1 D2
May 17 00:12:56.211018 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot
May 17 00:12:56.211089 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400
May 17 00:12:56.211154 kernel: pci 0004:01:00.0: enabling Extended Tags
May 17 00:12:56.211218 kernel: pci 0004:01:00.0: supports D1 D2
May 17 00:12:56.211283 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:12:56.211361 kernel: pci_bus 0004:02: extended config space not accessible
May 17 00:12:56.211435 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000
May 17 00:12:56.211501 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff]
May 17 00:12:56.211568 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff]
May 17 00:12:56.211638 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f]
May 17 00:12:56.211704 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb
May 17 00:12:56.211771 kernel: pci 0004:02:00.0: supports D1 D2
May 17 00:12:56.211840 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:12:56.211913 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330
May 17 00:12:56.211977 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit]
May 17 00:12:56.212042 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold
May 17 00:12:56.212101 kernel: pci_bus 0004:00: on NUMA node 0
May 17 00:12:56.212165 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000
May 17 00:12:56.212227 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:12:56.212293 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 00:12:56.212357 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 00:12:56.212420 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:12:56.212482 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.212543 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:12:56.212610 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 17 00:12:56.212674 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 00:12:56.212739 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff]
May 17 00:12:56.212801 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 00:12:56.212863 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff]
May 17 00:12:56.212925 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 00:12:56.212988 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.213049 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.213112 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.213174 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.213238 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.213300 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.213362 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.213424 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.213486 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.213548 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.213612 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.213675 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.213739 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 17 00:12:56.213805 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000]
May 17 00:12:56.213869 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000]
May 17 00:12:56.213936 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff]
May 17 00:12:56.214003 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff]
May 17 00:12:56.214069 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080]
May 17 00:12:56.214136 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080]
May 17 00:12:56.214201 kernel: pci 0004:01:00.0: PCI bridge to [bus 02]
May 17 00:12:56.214267 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]
May 17 00:12:56.214330 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02]
May 17 00:12:56.214392 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]
May 17 00:12:56.214455 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 00:12:56.214519 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit]
May 17 00:12:56.214582 kernel: pci 0004:00:03.0: PCI bridge to [bus 03]
May 17 00:12:56.214648 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]
May 17 00:12:56.214711 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 00:12:56.214776 kernel: pci 0004:00:05.0: PCI bridge to [bus 04]
May 17 00:12:56.214839 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]
May 17 00:12:56.214900 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 00:12:56.214957 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 00:12:56.215012 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window]
May 17 00:12:56.215068 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window]
May 17 00:12:56.215136 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff]
May 17 00:12:56.215197 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 00:12:56.215258 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff]
May 17 00:12:56.215324 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff]
May 17 00:12:56.215381 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 00:12:56.215446 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff]
May 17 00:12:56.215506 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 00:12:56.215516 kernel: iommu: Default domain type: Translated
May 17 00:12:56.215524 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:12:56.215533 kernel: efivars: Registered efivars operations
May 17 00:12:56.215601 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device
May 17 00:12:56.215669 kernel: pci 0004:02:00.0: vgaarb: bridge control possible
May 17 00:12:56.215736 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
May 17 00:12:56.215747 kernel: vgaarb: loaded
May 17 00:12:56.215757 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:12:56.215765 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:12:56.215774 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:12:56.215782 kernel: pnp: PnP ACPI init
May 17 00:12:56.215851 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved
May 17 00:12:56.215909 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved
May 17 00:12:56.215967 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved
May 17 00:12:56.216025 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved
May 17 00:12:56.216082 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved
May 17 00:12:56.216138 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved
May 17 00:12:56.216197 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved May 17 00:12:56.216255 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 17 00:12:56.216265 kernel: pnp: PnP ACPI: found 1 devices May 17 00:12:56.216273 kernel: NET: Registered PF_INET protocol family May 17 00:12:56.216281 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216291 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 00:12:56.216300 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:12:56.216308 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:12:56.216317 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216325 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 17 00:12:56.216333 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216341 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216350 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:12:56.216414 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 17 00:12:56.216428 kernel: kvm [1]: IPA Size Limit: 48 bits May 17 00:12:56.216436 kernel: kvm [1]: GICv3: no GICV resource entry May 17 00:12:56.216444 kernel: kvm [1]: disabling GICv2 emulation May 17 00:12:56.216453 kernel: kvm [1]: GIC system register CPU interface enabled May 17 00:12:56.216461 kernel: kvm [1]: vgic interrupt IRQ9 May 17 00:12:56.216469 kernel: kvm [1]: VHE mode initialized successfully May 17 00:12:56.216477 kernel: Initialise system trusted keyrings May 17 00:12:56.216485 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 17 00:12:56.216493 kernel: Key type asymmetric registered May 17 00:12:56.216502 kernel: Asymmetric key parser 'x509' registered May 17 00:12:56.216510 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 17 00:12:56.216518 kernel: io scheduler mq-deadline registered May 17 00:12:56.216526 kernel: io scheduler kyber registered May 17 00:12:56.216535 kernel: io scheduler bfq registered May 17 00:12:56.216543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:12:56.216551 kernel: ACPI: button: Power Button [PWRB] May 17 00:12:56.216559 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 17 00:12:56.216567 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:12:56.216641 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 17 00:12:56.216701 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.216760 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.216817 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 17 00:12:56.216876 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 17 00:12:56.216933 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 17 00:12:56.217002 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 17 00:12:56.217061 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.217119 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.217176 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 17 00:12:56.217234 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 17 00:12:56.217292 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 17 00:12:56.217357 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 17 00:12:56.217418 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.217476 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.217534 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 17 00:12:56.217594 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 17 00:12:56.217653 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 17 00:12:56.217718 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 17 00:12:56.217778 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.217836 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.217894 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 17 00:12:56.217952 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 17 00:12:56.218010 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 17 00:12:56.218081 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 17 00:12:56.218140 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.218200 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.218259 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 17 00:12:56.218317 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 17 00:12:56.218375 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 17 00:12:56.218442 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 17 00:12:56.218501 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.218560 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.218621 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 17 00:12:56.218680 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 17 00:12:56.218737 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 17 00:12:56.218802 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 17 00:12:56.218860 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.218918 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.218980 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 17 00:12:56.219039 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 17 00:12:56.219097 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 17 00:12:56.219162 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 17 00:12:56.219220 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.219278 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.219339 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 17 00:12:56.219397 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 17 00:12:56.219455 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 17 00:12:56.219466 kernel: thunder_xcv, ver 1.0 May 17 00:12:56.219474 kernel: thunder_bgx, ver 1.0 May 17 00:12:56.219482 kernel: nicpf, ver 1.0 May 17 00:12:56.219490 kernel: nicvf, ver 1.0 May 17 00:12:56.219552 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:12:56.219616 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:12:54 UTC (1747440774) May 17 00:12:56.219627 kernel: efifb: probing for efifb May 17 00:12:56.219635 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 17 00:12:56.219644 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:12:56.219652 kernel: efifb: scrolling: redraw May 
17 00:12:56.219660 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:12:56.219668 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:12:56.219676 kernel: fb0: EFI VGA frame buffer device May 17 00:12:56.219686 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 17 00:12:56.219694 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:12:56.219703 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:12:56.219711 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:12:56.219719 kernel: watchdog: Hard watchdog permanently disabled May 17 00:12:56.219727 kernel: NET: Registered PF_INET6 protocol family May 17 00:12:56.219737 kernel: Segment Routing with IPv6 May 17 00:12:56.219745 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:12:56.219753 kernel: NET: Registered PF_PACKET protocol family May 17 00:12:56.219763 kernel: Key type dns_resolver registered May 17 00:12:56.219771 kernel: registered taskstats version 1 May 17 00:12:56.219779 kernel: Loading compiled-in X.509 certificates May 17 00:12:56.219787 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:12:56.219795 kernel: Key type .fscrypt registered May 17 00:12:56.219803 kernel: Key type fscrypt-provisioning registered May 17 00:12:56.219811 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:12:56.219819 kernel: ima: Allocated hash algorithm: sha1 May 17 00:12:56.219827 kernel: ima: No architecture policies found May 17 00:12:56.219835 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:12:56.219901 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 17 00:12:56.219966 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 17 00:12:56.220031 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 17 00:12:56.220095 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 17 00:12:56.220160 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 17 00:12:56.220224 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 17 00:12:56.220288 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 17 00:12:56.220352 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 17 00:12:56.220419 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 17 00:12:56.220485 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 17 00:12:56.220549 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 17 00:12:56.220616 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 17 00:12:56.220681 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 17 00:12:56.220744 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 17 00:12:56.220810 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 17 00:12:56.220873 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 17 00:12:56.220941 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 17 00:12:56.221004 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 17 00:12:56.221069 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 17 00:12:56.221132 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 17 00:12:56.221197 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 17 00:12:56.221260 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 17 00:12:56.221325 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 May 17 00:12:56.221390 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 17 00:12:56.221459 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 17 00:12:56.221522 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 17 00:12:56.221590 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 17 00:12:56.221655 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 17 00:12:56.221719 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 17 00:12:56.221783 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 17 00:12:56.221848 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 17 00:12:56.221912 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 17 00:12:56.221976 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 17 00:12:56.222042 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 17 00:12:56.222106 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 17 00:12:56.222171 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 17 00:12:56.222234 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 17 00:12:56.222298 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 17 00:12:56.222363 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 17 00:12:56.222427 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 17 00:12:56.222491 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 17 00:12:56.222557 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 17 00:12:56.222627 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 17 00:12:56.222692 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 17 00:12:56.222758 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 17 00:12:56.222821 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 17 00:12:56.222886 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 17 00:12:56.222949 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
May 17 00:12:56.223015 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 17 00:12:56.223081 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 17 00:12:56.223146 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 17 00:12:56.223208 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 17 00:12:56.223273 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 17 00:12:56.223336 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 17 00:12:56.223401 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 17 00:12:56.223464 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 17 00:12:56.223529 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 17 00:12:56.223597 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 17 00:12:56.223662 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 17 00:12:56.223727 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 17 00:12:56.223793 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 17 00:12:56.223804 kernel: clk: Disabling unused clocks May 17 00:12:56.223813 kernel: Freeing unused kernel memory: 39424K May 17 00:12:56.223821 kernel: Run /init as init process May 17 00:12:56.223829 kernel: with arguments: May 17 00:12:56.223839 kernel: /init May 17 00:12:56.223846 kernel: with environment: May 17 00:12:56.223854 kernel: HOME=/ May 17 00:12:56.223862 kernel: TERM=linux May 17 00:12:56.223870 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:12:56.223880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:12:56.223891 systemd[1]: Detected architecture arm64. May 17 00:12:56.223899 systemd[1]: Running in initrd. 
May 17 00:12:56.223909 systemd[1]: No hostname configured, using default hostname. May 17 00:12:56.223917 systemd[1]: Hostname set to . May 17 00:12:56.223925 systemd[1]: Initializing machine ID from random generator. May 17 00:12:56.223934 systemd[1]: Queued start job for default target initrd.target. May 17 00:12:56.223943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:12:56.223951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:12:56.223960 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:12:56.223969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:12:56.223979 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:12:56.223988 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:12:56.223997 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:12:56.224006 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:12:56.224015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:12:56.224023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:12:56.224032 systemd[1]: Reached target paths.target - Path Units. May 17 00:12:56.224042 systemd[1]: Reached target slices.target - Slice Units. May 17 00:12:56.224050 systemd[1]: Reached target swap.target - Swaps. May 17 00:12:56.224059 systemd[1]: Reached target timers.target - Timer Units. May 17 00:12:56.224067 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 00:12:56.224075 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:12:56.224084 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:12:56.224092 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:12:56.224101 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:12:56.224111 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:12:56.224119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:12:56.224128 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:12:56.224136 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:12:56.224145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:12:56.224153 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:12:56.224162 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:12:56.224170 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:12:56.224179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:12:56.224208 systemd-journald[898]: Collecting audit messages is disabled. May 17 00:12:56.224228 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:12:56.224237 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:12:56.224245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:12:56.224255 kernel: Bridge firewalling registered May 17 00:12:56.224264 systemd-journald[898]: Journal started May 17 00:12:56.224283 systemd-journald[898]: Runtime Journal (/run/log/journal/501be0719bdb4ce09b7001a7f462581c) is 8.0M, max 4.0G, 3.9G free. 
May 17 00:12:56.183783 systemd-modules-load[900]: Inserted module 'overlay' May 17 00:12:56.264304 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:12:56.206341 systemd-modules-load[900]: Inserted module 'br_netfilter' May 17 00:12:56.270016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:12:56.280963 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:12:56.291908 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:12:56.302691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:12:56.332724 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:12:56.339009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:12:56.357041 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:12:56.368458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:12:56.385245 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:12:56.401722 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:12:56.418474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:12:56.429968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:12:56.459693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:12:56.473090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 17 00:12:56.481730 dracut-cmdline[946]: dracut-dracut-053 May 17 00:12:56.492849 dracut-cmdline[946]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:12:56.486902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:12:56.501114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:12:56.510998 systemd-resolved[952]: Positive Trust Anchors: May 17 00:12:56.511008 systemd-resolved[952]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:12:56.511040 systemd-resolved[952]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:12:56.526257 systemd-resolved[952]: Defaulting to hostname 'linux'. May 17 00:12:56.539058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:12:56.558657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:12:56.663262 kernel: SCSI subsystem initialized May 17 00:12:56.674596 kernel: Loading iSCSI transport class v2.0-870. 
May 17 00:12:56.693596 kernel: iscsi: registered transport (tcp) May 17 00:12:56.721129 kernel: iscsi: registered transport (qla4xxx) May 17 00:12:56.721151 kernel: QLogic iSCSI HBA Driver May 17 00:12:56.764570 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:12:56.783713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:12:56.828910 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:12:56.828930 kernel: device-mapper: uevent: version 1.0.3 May 17 00:12:56.847595 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:12:56.904599 kernel: raid6: neonx8 gen() 15849 MB/s May 17 00:12:56.930597 kernel: raid6: neonx4 gen() 15714 MB/s May 17 00:12:56.955597 kernel: raid6: neonx2 gen() 13274 MB/s May 17 00:12:56.980597 kernel: raid6: neonx1 gen() 10526 MB/s May 17 00:12:57.005597 kernel: raid6: int64x8 gen() 7000 MB/s May 17 00:12:57.030597 kernel: raid6: int64x4 gen() 7375 MB/s May 17 00:12:57.055597 kernel: raid6: int64x2 gen() 6153 MB/s May 17 00:12:57.083634 kernel: raid6: int64x1 gen() 5077 MB/s May 17 00:12:57.083655 kernel: raid6: using algorithm neonx8 gen() 15849 MB/s May 17 00:12:57.118051 kernel: raid6: .... xor() 11973 MB/s, rmw enabled May 17 00:12:57.118072 kernel: raid6: using neon recovery algorithm May 17 00:12:57.141190 kernel: xor: measuring software checksum speed May 17 00:12:57.141215 kernel: 8regs : 19769 MB/sec May 17 00:12:57.153593 kernel: 32regs : 19308 MB/sec May 17 00:12:57.164598 kernel: arm64_neon : 26518 MB/sec May 17 00:12:57.164619 kernel: xor: using function: arm64_neon (26518 MB/sec) May 17 00:12:57.225599 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:12:57.235961 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 17 00:12:57.257709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:12:57.270876 systemd-udevd[1143]: Using default interface naming scheme 'v255'. May 17 00:12:57.273915 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:12:57.291741 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:12:57.305995 dracut-pre-trigger[1153]: rd.md=0: removing MD RAID activation May 17 00:12:57.332414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:12:57.354708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:12:57.460336 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:12:57.481762 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:12:57.653484 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:12:57.653507 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:12:57.653521 kernel: ACPI: bus type USB registered May 17 00:12:57.653531 kernel: usbcore: registered new interface driver usbfs May 17 00:12:57.653541 kernel: usbcore: registered new interface driver hub May 17 00:12:57.653550 kernel: usbcore: registered new device driver usb May 17 00:12:57.653560 kernel: PTP clock support registered May 17 00:12:57.653570 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31 May 17 00:12:57.653726 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 17 00:12:57.653810 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 17 00:12:57.653891 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 17 00:12:57.653972 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 00:12:57.653982 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
May 17 00:12:57.653992 kernel: igb 0003:03:00.0: Adding to iommu group 32 May 17 00:12:57.654079 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 33 May 17 00:12:57.557022 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:12:57.688076 kernel: nvme 0005:03:00.0: Adding to iommu group 34 May 17 00:12:57.688193 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 17 00:12:57.557085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:12:57.682476 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:12:57.693773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:12:57.693843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:12:57.712565 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:12:57.737680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:12:57.745615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:12:57.760606 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:12:57.965750 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 17 00:12:57.965976 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 17 00:12:57.966058 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 17 00:12:57.966135 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 17 00:12:57.966210 kernel: hub 1-0:1.0: USB hub found May 17 00:12:57.966306 kernel: hub 1-0:1.0: 4 ports detected May 17 00:12:57.966383 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 17 00:12:57.966470 kernel: hub 2-0:1.0: USB hub found May 17 00:12:57.966555 kernel: hub 2-0:1.0: 4 ports detected May 17 00:12:57.966638 kernel: nvme nvme0: pci function 0005:03:00.0 May 17 00:12:57.966724 kernel: mlx5_core 0001:01:00.0: firmware version: 14.30.1004 May 17 00:12:57.966810 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:12:57.966887 kernel: nvme nvme1: pci function 0005:04:00.0 May 17 00:12:57.770698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:12:57.784701 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:12:57.924720 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:12:58.068643 kernel: igb 0003:03:00.0: added PHC on eth0 May 17 00:12:58.068854 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:12:58.068939 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6c May 17 00:12:58.069014 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 17 00:12:58.069088 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 17 00:12:58.069163 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 17 00:12:58.069244 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 17 00:12:57.979732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:12:58.088549 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 17 00:12:58.000706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:12:58.234014 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 17 00:12:58.234151 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
May 17 00:12:58.234164 kernel: GPT:9289727 != 1875385007
May 17 00:12:58.234180 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:12:58.234189 kernel: GPT:9289727 != 1875385007
May 17 00:12:58.234199 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:12:58.234208 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:12:58.234218 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 17 00:12:58.234299 kernel: igb 0003:03:00.1: added PHC on eth1
May 17 00:12:58.234389 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1200)
May 17 00:12:58.234400 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (1219)
May 17 00:12:58.234412 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 17 00:12:58.234490 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6d
May 17 00:12:58.234566 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 17 00:12:58.073792 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:12:58.305849 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 00:12:58.305973 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 17 00:12:58.306051 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 17 00:12:58.149624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 17 00:12:58.330546 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 17 00:12:58.301372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:12:58.330898 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 17 00:12:58.350298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 00:12:58.365328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:12:58.405008 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 17 00:12:58.377519 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:12:58.425737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:12:58.453686 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:12:58.453701 disk-uuid[1315]: Primary Header is updated.
May 17 00:12:58.453701 disk-uuid[1315]: Secondary Entries is updated.
May 17 00:12:58.453701 disk-uuid[1315]: Secondary Header is updated.
May 17 00:12:58.530465 kernel: hub 1-3:1.0: USB hub found
May 17 00:12:58.530727 kernel: hub 1-3:1.0: 4 ports detected
May 17 00:12:58.592600 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:12:58.605593 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 17 00:12:58.628809 kernel: mlx5_core 0001:01:00.1: firmware version: 14.30.1004
May 17 00:12:58.628961 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:12:58.657594 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 17 00:12:58.700084 kernel: hub 2-3:1.0: USB hub found
May 17 00:12:58.700276 kernel: hub 2-3:1.0: 4 ports detected
May 17 00:12:58.928220 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 17 00:12:59.211600 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:12:59.226595 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 17 00:12:59.244595 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 17 00:12:59.452345 disk-uuid[1316]: The operation has completed successfully.
May 17 00:12:59.457988 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:12:59.473552 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:12:59.473639 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:12:59.517695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:12:59.528038 sh[1482]: Success
May 17 00:12:59.546603 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:12:59.579906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:12:59.599744 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:12:59.610037 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:12:59.615593 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 00:12:59.615613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 00:12:59.615623 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:12:59.615634 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:12:59.615644 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:12:59.618592 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:12:59.704628 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:12:59.711300 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:12:59.722762 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:12:59.803708 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:12:59.803722 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:12:59.803732 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:12:59.803742 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:12:59.803751 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:12:59.731570 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:12:59.842126 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:12:59.831982 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:12:59.866703 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:12:59.913663 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:12:59.934269 ignition[1591]: Ignition 2.19.0
May 17 00:12:59.934276 ignition[1591]: Stage: fetch-offline
May 17 00:12:59.938021 unknown[1591]: fetched base config from "system"
May 17 00:12:59.934321 ignition[1591]: no configs at "/usr/lib/ignition/base.d"
May 17 00:12:59.938028 unknown[1591]: fetched user config from "system"
May 17 00:12:59.934329 ignition[1591]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:12:59.945752 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:12:59.934474 ignition[1591]: parsed url from cmdline: ""
May 17 00:12:59.956722 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:12:59.934478 ignition[1591]: no config URL provided
May 17 00:12:59.968905 systemd-networkd[1714]: lo: Link UP
May 17 00:12:59.934482 ignition[1591]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:12:59.968908 systemd-networkd[1714]: lo: Gained carrier
May 17 00:12:59.934534 ignition[1591]: parsing config with SHA512: 818ac516e6477d480d76b184d69e9bf370c42d49acd8c286ed77f0ebc672e798ddd7a66bfac932d56bf402354122cd19e0e910e6813420b97e10863b00e186da
May 17 00:12:59.972422 systemd-networkd[1714]: Enumeration completed
May 17 00:12:59.939873 ignition[1591]: fetch-offline: fetch-offline passed
May 17 00:12:59.972606 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:12:59.939879 ignition[1591]: POST message to Packet Timeline
May 17 00:12:59.973565 systemd-networkd[1714]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:12:59.939884 ignition[1591]: POST Status error: resource requires networking
May 17 00:12:59.978459 systemd[1]: Reached target network.target - Network.
May 17 00:12:59.939964 ignition[1591]: Ignition finished successfully
May 17 00:12:59.988559 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:13:00.026205 ignition[1723]: Ignition 2.19.0
May 17 00:13:00.001785 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:13:00.026211 ignition[1723]: Stage: kargs
May 17 00:13:00.025483 systemd-networkd[1714]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:00.026433 ignition[1723]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:00.077188 systemd-networkd[1714]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:00.026442 ignition[1723]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:00.027445 ignition[1723]: kargs: kargs passed
May 17 00:13:00.027449 ignition[1723]: POST message to Packet Timeline
May 17 00:13:00.027461 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:00.029972 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47133->[::1]:53: read: connection refused
May 17 00:13:00.230085 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #2
May 17 00:13:00.230485 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51606->[::1]:53: read: connection refused
May 17 00:13:00.608603 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 00:13:00.611267 systemd-networkd[1714]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:00.630640 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #3
May 17 00:13:00.630974 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49118->[::1]:53: read: connection refused
May 17 00:13:01.206601 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 00:13:01.209243 systemd-networkd[1714]: eno1: Link UP
May 17 00:13:01.209372 systemd-networkd[1714]: eno2: Link UP
May 17 00:13:01.209488 systemd-networkd[1714]: enP1p1s0f0np0: Link UP
May 17 00:13:01.209585 systemd-networkd[1714]: enP1p1s0f0np0: Gained carrier
May 17 00:13:01.220872 systemd-networkd[1714]: enP1p1s0f1np1: Link UP
May 17 00:13:01.252621 systemd-networkd[1714]: enP1p1s0f0np0: DHCPv4 address 147.28.151.230/30, gateway 147.28.151.229 acquired from 147.28.144.140
May 17 00:13:01.431828 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #4
May 17 00:13:01.432525 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54497->[::1]:53: read: connection refused
May 17 00:13:01.616031 systemd-networkd[1714]: enP1p1s0f1np1: Gained carrier
May 17 00:13:02.215816 systemd-networkd[1714]: enP1p1s0f0np0: Gained IPv6LL
May 17 00:13:02.983793 systemd-networkd[1714]: enP1p1s0f1np1: Gained IPv6LL
May 17 00:13:03.034135 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #5
May 17 00:13:03.034702 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49755->[::1]:53: read: connection refused
May 17 00:13:06.237759 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #6
May 17 00:13:07.380557 ignition[1723]: GET result: OK
May 17 00:13:08.272388 ignition[1723]: Ignition finished successfully
May 17 00:13:08.275950 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:13:08.286706 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:13:08.303014 ignition[1747]: Ignition 2.19.0
May 17 00:13:08.303021 ignition[1747]: Stage: disks
May 17 00:13:08.303208 ignition[1747]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:08.303217 ignition[1747]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:08.304292 ignition[1747]: disks: disks passed
May 17 00:13:08.304296 ignition[1747]: POST message to Packet Timeline
May 17 00:13:08.304309 ignition[1747]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:08.816809 ignition[1747]: GET result: OK
May 17 00:13:09.191846 ignition[1747]: Ignition finished successfully
May 17 00:13:09.193922 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:13:09.200298 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:13:09.207858 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:13:09.215789 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:13:09.224314 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:13:09.233100 systemd[1]: Reached target basic.target - Basic System.
May 17 00:13:09.250736 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:13:09.266056 systemd-fsck[1770]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:13:09.269663 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:13:09.289687 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:13:09.354508 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:13:09.359446 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 00:13:09.364727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:13:09.386672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:09.394593 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1780)
May 17 00:13:09.394610 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:09.394621 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:09.394631 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:09.395593 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:09.395603 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:09.489667 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:13:09.496062 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:13:09.507453 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 17 00:13:09.522723 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:13:09.522752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:09.535867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:09.566507 coreos-metadata[1800]: May 17 00:13:09.551 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:09.585641 coreos-metadata[1798]: May 17 00:13:09.551 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:09.549863 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:13:09.574705 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:13:09.613858 initrd-setup-root[1819]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:13:09.619869 initrd-setup-root[1826]: cut: /sysroot/etc/group: No such file or directory
May 17 00:13:09.625785 initrd-setup-root[1833]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:13:09.631726 initrd-setup-root[1840]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:13:09.700409 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:13:09.722692 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:13:09.734511 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:13:09.759900 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:09.765791 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:13:09.786808 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:13:09.803036 ignition[1913]: INFO : Ignition 2.19.0
May 17 00:13:09.803036 ignition[1913]: INFO : Stage: mount
May 17 00:13:09.814031 ignition[1913]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:09.814031 ignition[1913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:09.814031 ignition[1913]: INFO : mount: mount passed
May 17 00:13:09.814031 ignition[1913]: INFO : POST message to Packet Timeline
May 17 00:13:09.814031 ignition[1913]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:09.983814 coreos-metadata[1798]: May 17 00:13:09.983 INFO Fetch successful
May 17 00:13:10.028876 coreos-metadata[1798]: May 17 00:13:10.028 INFO wrote hostname ci-4081.3.3-n-3bfd76e738 to /sysroot/etc/hostname
May 17 00:13:10.032021 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:10.220030 coreos-metadata[1800]: May 17 00:13:10.220 INFO Fetch successful
May 17 00:13:10.268552 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 00:13:10.268722 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 17 00:13:10.312714 ignition[1913]: INFO : GET result: OK
May 17 00:13:10.608789 ignition[1913]: INFO : Ignition finished successfully
May 17 00:13:10.611008 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:13:10.633693 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:13:10.645982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:10.681448 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1937)
May 17 00:13:10.681485 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:10.695877 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:10.708925 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:10.731863 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:10.731885 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:10.739959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:10.768964 ignition[1954]: INFO : Ignition 2.19.0
May 17 00:13:10.768964 ignition[1954]: INFO : Stage: files
May 17 00:13:10.778463 ignition[1954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:10.778463 ignition[1954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:10.778463 ignition[1954]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:13:10.778463 ignition[1954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:13:10.778463 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 17 00:13:10.774416 unknown[1954]: wrote ssh authorized keys file for user: core
May 17 00:13:11.698056 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:13:12.743038 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 17 00:13:13.145730 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:13:13.458731 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:13.483558 ignition[1954]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:13.483558 ignition[1954]: INFO : files: files passed
May 17 00:13:13.483558 ignition[1954]: INFO : POST message to Packet Timeline
May 17 00:13:13.483558 ignition[1954]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:13.986930 ignition[1954]: INFO : GET result: OK
May 17 00:13:14.276585 ignition[1954]: INFO : Ignition finished successfully
May 17 00:13:14.278954 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:13:14.300718 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:13:14.313258 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:13:14.332055 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:13:14.332133 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:13:14.350549 initrd-setup-root-after-ignition[1999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:14.350549 initrd-setup-root-after-ignition[1999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:14.345035 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:14.403333 initrd-setup-root-after-ignition[2004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:14.358171 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:13:14.383791 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:13:14.417675 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:13:14.417767 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:13:14.427667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:13:14.443919 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:13:14.455529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:13:14.470768 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:13:14.492857 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:14.513749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:13:14.528804 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:13:14.538415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:13:14.549991 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:13:14.561612 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:13:14.561714 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:14.573369 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:13:14.584680 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:13:14.596222 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:13:14.607779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:14.619144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:13:14.630540 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:13:14.641875 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:13:14.653273 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:13:14.664747 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:13:14.681774 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:13:14.693094 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:13:14.693193 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:13:14.704661 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:13:14.715772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:13:14.727098 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:13:14.731626 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:13:14.738496 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:13:14.738600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:13:14.750056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:13:14.750176 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:13:14.761438 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:13:14.772673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:13:14.772767 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:13:14.790011 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:13:14.801515 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:13:14.813097 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:13:14.813192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:13:14.824719 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:13:14.824810 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:13:14.938109 ignition[2026]: INFO : Ignition 2.19.0
May 17 00:13:14.938109 ignition[2026]: INFO : Stage: umount
May 17 00:13:14.938109 ignition[2026]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:14.938109 ignition[2026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:14.938109 ignition[2026]: INFO : umount: umount passed
May 17 00:13:14.938109 ignition[2026]: INFO : POST message to Packet Timeline
May 17 00:13:14.938109 ignition[2026]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:14.836491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:13:14.836581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:14.848162 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:13:14.848247 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:13:14.859840 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:13:14.859923 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:14.887798 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:13:14.896384 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:13:14.908496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:13:14.908612 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:13:14.920789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:13:14.920879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:13:14.934621 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:13:14.935529 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:13:14.935615 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:13:14.945565 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:13:14.945726 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:13:15.509895 ignition[2026]: INFO : GET result: OK
May 17 00:13:15.834431 ignition[2026]: INFO : Ignition finished successfully
May 17 00:13:15.837291 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:13:15.837441 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:13:15.844469 systemd[1]: Stopped target network.target - Network.
May 17 00:13:15.853470 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:13:15.853529 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:13:15.862975 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:13:15.863046 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:13:15.872378 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:13:15.872416 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:13:15.881773 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:13:15.881805 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:13:15.891508 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:13:15.891536 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:13:15.901331 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:13:15.906607 systemd-networkd[1714]: enP1p1s0f0np0: DHCPv6 lease lost May 17 00:13:15.911014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:13:15.915745 systemd-networkd[1714]: enP1p1s0f1np1: DHCPv6 lease lost May 17 00:13:15.922209 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:13:15.922460 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:13:15.933278 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:13:15.934679 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:13:15.942482 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:13:15.942678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:13:15.964728 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:13:15.970570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:13:15.970626 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:13:15.980610 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 17 00:13:15.980645 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:13:15.990689 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:13:15.990719 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:13:16.000958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:13:16.000989 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:13:16.011397 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:13:16.035951 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:13:16.036075 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:13:16.045212 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:13:16.045357 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:13:16.054285 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:13:16.054321 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:13:16.064911 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:13:16.064948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:13:16.081013 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:13:16.081055 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:13:16.091699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:13:16.091735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:13:16.115772 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:13:16.124870 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 17 00:13:16.124933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:13:16.135879 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:13:16.135909 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:13:16.152314 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:13:16.152347 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:13:16.169662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:13:16.169708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:13:16.181661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:13:16.181732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:13:16.688810 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:13:16.689703 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:13:16.700425 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:13:16.722700 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:13:16.736318 systemd[1]: Switching root. 
May 17 00:13:16.797122 systemd-journald[898]: Journal stopped
May 17 00:12:56.164269 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 17 00:12:56.164291 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:12:56.164299 kernel: KASLR enabled
May 17 00:12:56.164305 kernel: efi: EFI v2.7 by American Megatrends
May 17 00:12:56.164311 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47e818 RNG=0xebf00018 MEMRESERVE=0xe45e8f98
May 17 00:12:56.164317 kernel: random: crng init done
May 17 00:12:56.164324 kernel: esrt: Reserving ESRT space from 0x00000000ea47e818 to 0x00000000ea47e878.
May 17 00:12:56.164330 kernel: ACPI: Early table checksum verification disabled
May 17 00:12:56.164337 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 17 00:12:56.164343 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 17 00:12:56.164349 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 17 00:12:56.164355 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 17 00:12:56.164361 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 17 00:12:56.164368 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 17 00:12:56.164377 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 17 00:12:56.164384 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:12:56.164390 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 17 00:12:56.164397 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:12:56.164403 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 17 00:12:56.164410 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164416 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164422 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164429 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 17 00:12:56.164436 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 17 00:12:56.164443 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 17 00:12:56.164449 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 00:12:56.164456 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 17 00:12:56.164462 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 17 00:12:56.164468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 17 00:12:56.164475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 17 00:12:56.164481 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 17 00:12:56.164488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 17 00:12:56.164494 kernel: NUMA: NODE_DATA [mem 0x83fdffca800-0x83fdffcffff]
May 17 00:12:56.164500 kernel: Zone ranges:
May 17 00:12:56.164507 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
May 17 00:12:56.164514 kernel: DMA32 empty
May 17 00:12:56.164521 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
May 17 00:12:56.164527 kernel: Movable zone start for each node
May 17 00:12:56.164533 kernel: Early memory node ranges
May 17 00:12:56.164540 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
May 17 00:12:56.164549 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
May 17 00:12:56.164556 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
May 17 00:12:56.164564 kernel: node 0: [mem 0x0000000094000000-0x00000000eba37fff]
May 17 00:12:56.164571 kernel: node 0: [mem 0x00000000eba38000-0x00000000ebeccfff]
May 17 00:12:56.164578 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 17 00:12:56.164584 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 17 00:12:56.164595 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 17 00:12:56.164602 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 17 00:12:56.164608 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff]
May 17 00:12:56.164615 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff]
May 17 00:12:56.164622 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 17 00:12:56.164628 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 17 00:12:56.164637 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 17 00:12:56.164644 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 17 00:12:56.164650 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 17 00:12:56.164657 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
May 17 00:12:56.164664 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
May 17 00:12:56.164670 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 17 00:12:56.164677 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 17 00:12:56.164684 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 17 00:12:56.164691 kernel: psci: probing for conduit method from ACPI.
May 17 00:12:56.164697 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:12:56.164704 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:12:56.164712 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 00:12:56.164719 kernel: psci: SMC Calling Convention v1.2
May 17 00:12:56.164725 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 17 00:12:56.164732 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 17 00:12:56.164739 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 17 00:12:56.164746 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 17 00:12:56.164752 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 17 00:12:56.164759 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 17 00:12:56.164766 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 17 00:12:56.164773 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 17 00:12:56.164779 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 17 00:12:56.164786 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 17 00:12:56.164794 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 17 00:12:56.164801 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 17 00:12:56.164808 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 17 00:12:56.164814 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 17 00:12:56.164821 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 17 00:12:56.164828 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 17 00:12:56.164834 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 17 00:12:56.164841 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 17 00:12:56.164848 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 17 00:12:56.164854 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 17 00:12:56.164861 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 17 00:12:56.164868 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 17 00:12:56.164876 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 17 00:12:56.164883 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 17 00:12:56.164889 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 17 00:12:56.164896 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 17 00:12:56.164903 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 17 00:12:56.164909 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 17 00:12:56.164916 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 17 00:12:56.164923 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 17 00:12:56.164930 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 17 00:12:56.164937 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 17 00:12:56.164943 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 17 00:12:56.164951 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 17 00:12:56.164958 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 17 00:12:56.164965 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 17 00:12:56.164972 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 17 00:12:56.164978 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 17 00:12:56.164985 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 17 00:12:56.164992 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 17 00:12:56.164998 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 17 00:12:56.165005 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 17 00:12:56.165012 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 17 00:12:56.165018 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 17 00:12:56.165026 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 17 00:12:56.165034 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 17 00:12:56.165040 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 17 00:12:56.165047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 17 00:12:56.165054 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 17 00:12:56.165060 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 17 00:12:56.165067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 17 00:12:56.165074 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 17 00:12:56.165081 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 17 00:12:56.165095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 17 00:12:56.165102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 17 00:12:56.165111 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 17 00:12:56.165118 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 17 00:12:56.165125 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 17 00:12:56.165132 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 17 00:12:56.165139 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 17 00:12:56.165146 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 17 00:12:56.165154 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 17 00:12:56.165161 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 17 00:12:56.165169 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 17 00:12:56.165176 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 17 00:12:56.165183 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 17 00:12:56.165190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 17 00:12:56.165197 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 17 00:12:56.165204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 17 00:12:56.165212 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 17 00:12:56.165219 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 17 00:12:56.165226 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 17 00:12:56.165233 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 17 00:12:56.165241 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 17 00:12:56.165248 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 17 00:12:56.165255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 17 00:12:56.165263 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 17 00:12:56.165270 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 17 00:12:56.165277 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 17 00:12:56.165284 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 17 00:12:56.165291 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:12:56.165299 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:12:56.165306 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 17 00:12:56.165313 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 17 00:12:56.165322 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 17 00:12:56.165329 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 17 00:12:56.165336 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 17 00:12:56.165343 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 17 00:12:56.165350 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 17 00:12:56.165357 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 17 00:12:56.165364 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 17 00:12:56.165371 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 17 00:12:56.165378 kernel: Detected PIPT I-cache on CPU0
May 17 00:12:56.165385 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:12:56.165392 kernel: CPU features: detected: Virtualization Host Extensions
May 17 00:12:56.165401 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:12:56.165408 kernel: CPU features: detected: Spectre-v4
May 17 00:12:56.165415 kernel: CPU features: detected: Spectre-BHB
May 17 00:12:56.165422 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:12:56.165430 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:12:56.165437 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:12:56.165444 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:12:56.165451 kernel: alternatives: applying boot alternatives
May 17 00:12:56.165460 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:12:56.165468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:12:56.165476 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 17 00:12:56.165483 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 17 00:12:56.165490 kernel: printk: log_buf_len min size: 262144 bytes
May 17 00:12:56.165498 kernel: printk: log_buf_len: 1048576 bytes
May 17 00:12:56.165505 kernel: printk: early log buf free: 250032(95%)
May 17 00:12:56.165512 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 17 00:12:56.165520 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 17 00:12:56.165527 kernel: Fallback order for Node 0: 0
May 17 00:12:56.165534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 17 00:12:56.165541 kernel: Policy zone: Normal
May 17 00:12:56.165549 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:12:56.165556 kernel: software IO TLB: area num 128.
May 17 00:12:56.165565 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 17 00:12:56.165572 kernel: Memory: 262922452K/268174336K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 5251884K reserved, 0K cma-reserved)
May 17 00:12:56.165580 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 17 00:12:56.165587 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:12:56.165597 kernel: rcu: RCU event tracing is enabled.
May 17 00:12:56.165605 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 17 00:12:56.165612 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:12:56.165619 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:12:56.165627 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:12:56.165634 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 17 00:12:56.165642 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:12:56.165651 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 17 00:12:56.165658 kernel: GICv3: 672 SPIs implemented
May 17 00:12:56.165665 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:12:56.165672 kernel: Root IRQ handler: gic_handle_irq
May 17 00:12:56.165679 kernel: GICv3: GICv3 features: 16 PPIs
May 17 00:12:56.165687 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 17 00:12:56.165694 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 17 00:12:56.165701 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 17 00:12:56.165708 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 17 00:12:56.165715 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 17 00:12:56.165722 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 17 00:12:56.165729 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 17 00:12:56.165736 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 17 00:12:56.165745 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 17 00:12:56.165752 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 17 00:12:56.165760 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165767 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165774 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 17 00:12:56.165782 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165789 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165796 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 17 00:12:56.165804 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165811 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165818 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 17 00:12:56.165827 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165835 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165842 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 17 00:12:56.165849 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165856 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165864 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 17 00:12:56.165871 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165878 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165886 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 17 00:12:56.165893 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165900 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165909 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 17 00:12:56.165916 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:12:56.165923 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
May 17 00:12:56.165931 kernel: GICv3: using LPI property table @0x00000800003e0000
May 17 00:12:56.165938 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
May 17 00:12:56.165945 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:12:56.165952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.165960 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 17 00:12:56.165967 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 17 00:12:56.165974 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:12:56.165982 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:12:56.165990 kernel: Console: colour dummy device 80x25
May 17 00:12:56.165998 kernel: printk: console [tty0] enabled
May 17 00:12:56.166005 kernel: ACPI: Core revision 20230628
May 17 00:12:56.166013 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:12:56.166020 kernel: pid_max: default: 81920 minimum: 640
May 17 00:12:56.166028 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:12:56.166035 kernel: landlock: Up and running.
May 17 00:12:56.166042 kernel: SELinux: Initializing.
May 17 00:12:56.166050 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:12:56.166059 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:12:56.166066 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 00:12:56.166074 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 17 00:12:56.166081 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:12:56.166089 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:12:56.166096 kernel: Platform MSI: ITS@0x100100040000 domain created
May 17 00:12:56.166103 kernel: Platform MSI: ITS@0x100100060000 domain created
May 17 00:12:56.166110 kernel: Platform MSI: ITS@0x100100080000 domain created
May 17 00:12:56.166118 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 17 00:12:56.166126 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 17 00:12:56.166133 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 17 00:12:56.166140 kernel: Platform MSI: ITS@0x100100100000 domain created
May 17 00:12:56.166148 kernel: Platform MSI: ITS@0x100100120000 domain created
May 17 00:12:56.166155 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 17 00:12:56.166162 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 17 00:12:56.166169 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 17 00:12:56.166176 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 17 00:12:56.166184 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 17 00:12:56.166191 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 17 00:12:56.166199 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 17 00:12:56.166206 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 17 00:12:56.166214 kernel: Remapping and enabling EFI services.
May 17 00:12:56.166221 kernel: smp: Bringing up secondary CPUs ...
May 17 00:12:56.166228 kernel: Detected PIPT I-cache on CPU1
May 17 00:12:56.166235 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 17 00:12:56.166243 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 17 00:12:56.166250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166257 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 17 00:12:56.166266 kernel: Detected PIPT I-cache on CPU2
May 17 00:12:56.166274 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 17 00:12:56.166281 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 17 00:12:56.166288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166295 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 17 00:12:56.166303 kernel: Detected PIPT I-cache on CPU3
May 17 00:12:56.166310 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 17 00:12:56.166317 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 17 00:12:56.166325 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166332 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 17 00:12:56.166340 kernel: Detected PIPT I-cache on CPU4
May 17 00:12:56.166347 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 17 00:12:56.166355 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 17 00:12:56.166362 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166369 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 17 00:12:56.166376 kernel: Detected PIPT I-cache on CPU5
May 17 00:12:56.166384 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 17 00:12:56.166391 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 17 00:12:56.166398 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166407 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 17 00:12:56.166414 kernel: Detected PIPT I-cache on CPU6
May 17 00:12:56.166421 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 17 00:12:56.166429 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 17 00:12:56.166436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166443 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 17 00:12:56.166450 kernel: Detected PIPT I-cache on CPU7
May 17 00:12:56.166458 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 17 00:12:56.166465 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 17 00:12:56.166474 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166481 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 17 00:12:56.166488 kernel: Detected PIPT I-cache on CPU8
May 17 00:12:56.166496 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 17 00:12:56.166503 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 17 00:12:56.166510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166517 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 17 00:12:56.166524 kernel: Detected PIPT I-cache on CPU9
May 17 00:12:56.166532 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 17 00:12:56.166539 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 17 00:12:56.166547 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166554 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 17 00:12:56.166561 kernel: Detected PIPT I-cache on CPU10
May 17 00:12:56.166569 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 17 00:12:56.166576 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 17 00:12:56.166583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166593 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 17 00:12:56.166600 kernel: Detected PIPT I-cache on CPU11
May 17 00:12:56.166608 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 17 00:12:56.166615 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 17 00:12:56.166624 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166631 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 17 00:12:56.166638 kernel: Detected PIPT I-cache on CPU12
May 17 00:12:56.166646 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 17 00:12:56.166653 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 17 00:12:56.166660 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166667 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 17 00:12:56.166675 kernel: Detected PIPT I-cache on CPU13
May 17 00:12:56.166682 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 17 00:12:56.166691 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 17 00:12:56.166698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166705 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 17 00:12:56.166713 kernel: Detected PIPT I-cache on CPU14
May 17 00:12:56.166720 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 17 00:12:56.166727 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 17 00:12:56.166735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166742 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 17 00:12:56.166749 kernel: Detected PIPT I-cache on CPU15
May 17 00:12:56.166758 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 17 00:12:56.166765 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 17 00:12:56.166772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166780 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 17 00:12:56.166787 kernel: Detected PIPT I-cache on CPU16
May 17 00:12:56.166794 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 17 00:12:56.166802 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 17 00:12:56.166809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166816 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 17 00:12:56.166824 kernel: Detected PIPT I-cache on CPU17
May 17 00:12:56.166841 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 17 00:12:56.166850 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 17 00:12:56.166857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166865 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 17 00:12:56.166873 kernel: Detected PIPT I-cache on CPU18
May 17 00:12:56.166880 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 17 00:12:56.166888 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 17 00:12:56.166896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:12:56.166903 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 17 00:12:56.166912
kernel: Detected PIPT I-cache on CPU19 May 17 00:12:56.166920 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 17 00:12:56.166929 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 17 00:12:56.166936 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.166944 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 17 00:12:56.166951 kernel: Detected PIPT I-cache on CPU20 May 17 00:12:56.166959 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 17 00:12:56.166968 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 17 00:12:56.166977 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.166985 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 17 00:12:56.166992 kernel: Detected PIPT I-cache on CPU21 May 17 00:12:56.167000 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 17 00:12:56.167008 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 17 00:12:56.167015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167023 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 17 00:12:56.167032 kernel: Detected PIPT I-cache on CPU22 May 17 00:12:56.167040 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 17 00:12:56.167047 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 17 00:12:56.167055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167063 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 17 00:12:56.167070 kernel: Detected PIPT I-cache on CPU23 May 17 00:12:56.167078 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 17 00:12:56.167085 kernel: GICv3: CPU23: using allocated LPI pending table 
@0x0000080000960000 May 17 00:12:56.167093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167101 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 17 00:12:56.167110 kernel: Detected PIPT I-cache on CPU24 May 17 00:12:56.167118 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 17 00:12:56.167125 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 17 00:12:56.167133 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167141 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 17 00:12:56.167148 kernel: Detected PIPT I-cache on CPU25 May 17 00:12:56.167156 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 17 00:12:56.167164 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 17 00:12:56.167171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167180 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 17 00:12:56.167188 kernel: Detected PIPT I-cache on CPU26 May 17 00:12:56.167196 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 17 00:12:56.167203 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 17 00:12:56.167211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167219 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 17 00:12:56.167226 kernel: Detected PIPT I-cache on CPU27 May 17 00:12:56.167234 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 17 00:12:56.167242 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 17 00:12:56.167249 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167258 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 17 
00:12:56.167266 kernel: Detected PIPT I-cache on CPU28 May 17 00:12:56.167273 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 17 00:12:56.167281 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 17 00:12:56.167289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167296 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 17 00:12:56.167304 kernel: Detected PIPT I-cache on CPU29 May 17 00:12:56.167312 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 17 00:12:56.167319 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 17 00:12:56.167328 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167336 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] May 17 00:12:56.167343 kernel: Detected PIPT I-cache on CPU30 May 17 00:12:56.167351 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 17 00:12:56.167359 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 17 00:12:56.167366 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167374 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 17 00:12:56.167382 kernel: Detected PIPT I-cache on CPU31 May 17 00:12:56.167389 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 17 00:12:56.167397 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 17 00:12:56.167406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167414 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 17 00:12:56.167421 kernel: Detected PIPT I-cache on CPU32 May 17 00:12:56.167429 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 17 00:12:56.167437 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 May 17 00:12:56.167444 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167452 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 17 00:12:56.167459 kernel: Detected PIPT I-cache on CPU33 May 17 00:12:56.167467 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 17 00:12:56.167476 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 17 00:12:56.167484 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167492 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 17 00:12:56.167501 kernel: Detected PIPT I-cache on CPU34 May 17 00:12:56.167508 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 17 00:12:56.167516 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 17 00:12:56.167524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167531 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 17 00:12:56.167539 kernel: Detected PIPT I-cache on CPU35 May 17 00:12:56.167547 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 17 00:12:56.167556 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 17 00:12:56.167564 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167571 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 17 00:12:56.167579 kernel: Detected PIPT I-cache on CPU36 May 17 00:12:56.167586 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 17 00:12:56.167596 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 17 00:12:56.167604 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167612 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 
17 00:12:56.167619 kernel: Detected PIPT I-cache on CPU37 May 17 00:12:56.167629 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 17 00:12:56.167636 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 17 00:12:56.167644 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167652 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 17 00:12:56.167659 kernel: Detected PIPT I-cache on CPU38 May 17 00:12:56.167667 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 17 00:12:56.167675 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 17 00:12:56.167682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167690 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 17 00:12:56.167699 kernel: Detected PIPT I-cache on CPU39 May 17 00:12:56.167707 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 17 00:12:56.167715 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 17 00:12:56.167722 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167730 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 17 00:12:56.167738 kernel: Detected PIPT I-cache on CPU40 May 17 00:12:56.167745 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 17 00:12:56.167753 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 17 00:12:56.167762 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167770 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 17 00:12:56.167777 kernel: Detected PIPT I-cache on CPU41 May 17 00:12:56.167785 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 17 00:12:56.167793 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 May 17 00:12:56.167801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167808 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 17 00:12:56.167816 kernel: Detected PIPT I-cache on CPU42 May 17 00:12:56.167824 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 17 00:12:56.167832 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 17 00:12:56.167841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167848 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 17 00:12:56.167856 kernel: Detected PIPT I-cache on CPU43 May 17 00:12:56.167864 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 17 00:12:56.167872 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 17 00:12:56.167879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167887 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 17 00:12:56.167894 kernel: Detected PIPT I-cache on CPU44 May 17 00:12:56.167902 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 17 00:12:56.167911 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 17 00:12:56.167919 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167927 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 17 00:12:56.167934 kernel: Detected PIPT I-cache on CPU45 May 17 00:12:56.167942 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 17 00:12:56.167950 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 17 00:12:56.167957 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.167967 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
May 17 00:12:56.167975 kernel: Detected PIPT I-cache on CPU46 May 17 00:12:56.167982 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 17 00:12:56.167992 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 17 00:12:56.167999 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168007 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 17 00:12:56.168015 kernel: Detected PIPT I-cache on CPU47 May 17 00:12:56.168022 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 17 00:12:56.168030 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 17 00:12:56.168038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168045 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 17 00:12:56.168053 kernel: Detected PIPT I-cache on CPU48 May 17 00:12:56.168062 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 17 00:12:56.168070 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 17 00:12:56.168077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168085 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 17 00:12:56.168092 kernel: Detected PIPT I-cache on CPU49 May 17 00:12:56.168100 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 17 00:12:56.168108 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 17 00:12:56.168116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168123 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 17 00:12:56.168131 kernel: Detected PIPT I-cache on CPU50 May 17 00:12:56.168140 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 17 00:12:56.168148 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 May 17 00:12:56.168155 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168163 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 17 00:12:56.168171 kernel: Detected PIPT I-cache on CPU51 May 17 00:12:56.168178 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 17 00:12:56.168186 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 17 00:12:56.168194 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168201 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 17 00:12:56.168210 kernel: Detected PIPT I-cache on CPU52 May 17 00:12:56.168218 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 17 00:12:56.168226 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 17 00:12:56.168235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168242 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 17 00:12:56.168250 kernel: Detected PIPT I-cache on CPU53 May 17 00:12:56.168258 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 17 00:12:56.168266 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 17 00:12:56.168273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168281 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 17 00:12:56.168290 kernel: Detected PIPT I-cache on CPU54 May 17 00:12:56.168298 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 17 00:12:56.168306 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 17 00:12:56.168313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168321 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] May 17 00:12:56.168329 kernel: Detected PIPT I-cache on CPU55 May 17 00:12:56.168336 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 17 00:12:56.168344 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 17 00:12:56.168352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168361 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 17 00:12:56.168368 kernel: Detected PIPT I-cache on CPU56 May 17 00:12:56.168376 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 17 00:12:56.168384 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 17 00:12:56.168392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168399 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 17 00:12:56.168407 kernel: Detected PIPT I-cache on CPU57 May 17 00:12:56.168415 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 17 00:12:56.168423 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 17 00:12:56.168431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168439 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 17 00:12:56.168447 kernel: Detected PIPT I-cache on CPU58 May 17 00:12:56.168454 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 17 00:12:56.168462 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 17 00:12:56.168470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168478 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 17 00:12:56.168486 kernel: Detected PIPT I-cache on CPU59 May 17 00:12:56.168493 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 17 00:12:56.168501 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 May 17 00:12:56.168510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168518 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 17 00:12:56.168526 kernel: Detected PIPT I-cache on CPU60 May 17 00:12:56.168534 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 17 00:12:56.168541 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 17 00:12:56.168549 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168557 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 17 00:12:56.168564 kernel: Detected PIPT I-cache on CPU61 May 17 00:12:56.168572 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 17 00:12:56.168581 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 17 00:12:56.168591 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168599 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 17 00:12:56.168606 kernel: Detected PIPT I-cache on CPU62 May 17 00:12:56.168614 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 17 00:12:56.168622 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 17 00:12:56.168630 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168637 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 17 00:12:56.168645 kernel: Detected PIPT I-cache on CPU63 May 17 00:12:56.168653 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 17 00:12:56.168662 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 17 00:12:56.168670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168678 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] May 17 00:12:56.168686 kernel: Detected PIPT I-cache on CPU64 May 17 00:12:56.168693 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 17 00:12:56.168701 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 17 00:12:56.168709 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168716 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 17 00:12:56.168724 kernel: Detected PIPT I-cache on CPU65 May 17 00:12:56.168733 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 17 00:12:56.168740 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 17 00:12:56.168748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168756 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 17 00:12:56.168763 kernel: Detected PIPT I-cache on CPU66 May 17 00:12:56.168771 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 17 00:12:56.168779 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 17 00:12:56.168786 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168794 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 17 00:12:56.168802 kernel: Detected PIPT I-cache on CPU67 May 17 00:12:56.168811 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 17 00:12:56.168819 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 17 00:12:56.168826 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168834 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 17 00:12:56.168842 kernel: Detected PIPT I-cache on CPU68 May 17 00:12:56.168850 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 17 00:12:56.168857 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 May 17 00:12:56.168865 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168873 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 17 00:12:56.168882 kernel: Detected PIPT I-cache on CPU69 May 17 00:12:56.168890 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 17 00:12:56.168897 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 17 00:12:56.168905 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168913 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 17 00:12:56.168921 kernel: Detected PIPT I-cache on CPU70 May 17 00:12:56.168929 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 17 00:12:56.168936 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 17 00:12:56.168944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168952 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 17 00:12:56.168961 kernel: Detected PIPT I-cache on CPU71 May 17 00:12:56.168968 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 17 00:12:56.168976 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 17 00:12:56.168984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.168991 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 17 00:12:56.168999 kernel: Detected PIPT I-cache on CPU72 May 17 00:12:56.169007 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 17 00:12:56.169014 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 17 00:12:56.169022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169031 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] May 17 00:12:56.169039 kernel: Detected PIPT I-cache on CPU73 May 17 00:12:56.169047 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 17 00:12:56.169054 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 17 00:12:56.169062 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169070 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 17 00:12:56.169077 kernel: Detected PIPT I-cache on CPU74 May 17 00:12:56.169085 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 17 00:12:56.169093 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 17 00:12:56.169102 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169110 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 17 00:12:56.169117 kernel: Detected PIPT I-cache on CPU75 May 17 00:12:56.169125 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 17 00:12:56.169133 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 17 00:12:56.169140 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169148 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 17 00:12:56.169156 kernel: Detected PIPT I-cache on CPU76 May 17 00:12:56.169163 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 17 00:12:56.169171 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 17 00:12:56.169180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169188 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 17 00:12:56.169195 kernel: Detected PIPT I-cache on CPU77 May 17 00:12:56.169203 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 17 00:12:56.169211 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 17 00:12:56.169218 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169226 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 17 00:12:56.169234 kernel: Detected PIPT I-cache on CPU78 May 17 00:12:56.169241 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 17 00:12:56.169250 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 17 00:12:56.169258 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169265 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 17 00:12:56.169273 kernel: Detected PIPT I-cache on CPU79 May 17 00:12:56.169281 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 17 00:12:56.169288 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 17 00:12:56.169296 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:12:56.169304 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 17 00:12:56.169312 kernel: smp: Brought up 1 node, 80 CPUs May 17 00:12:56.169319 kernel: SMP: Total of 80 processors activated. 
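The bring-up sequence above ends with the kernel's own tally (`smp: Brought up 1 node, 80 CPUs`), and that tally can be cross-checked mechanically against the per-CPU lines. As a minimal sketch (the helper name `count_secondary_cpus` and the file path `/tmp/sample_boot.log` are assumptions for illustration, not part of the log): count the `Booted secondary processor` entries, remembering that the boot CPU never logs one, so an 80-CPU machine should show 79 matches.

```shell
# Sanity-check helper for a captured boot log: count the secondary-CPU
# boot lines. The boot CPU (CPU0) does not log one of these, so the
# expected count is (total CPUs - 1), i.e. 79 for the machine above.
count_secondary_cpus() {
    grep -c 'Booted secondary processor' "$1"
}

# Two-line stand-in for a real capture (hypothetical path):
printf '%s\n' \
  'kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]' \
  'kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]' \
  > /tmp/sample_boot.log
count_secondary_cpus /tmp/sample_boot.log   # prints 2
```

Run against the full log above, the same grep would print 79, which plus the boot CPU matches the reported total of 80.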
May 17 00:12:56.169328 kernel: CPU features: detected: 32-bit EL0 Support
May 17 00:12:56.169336 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 00:12:56.169344 kernel: CPU features: detected: Common not Private translations
May 17 00:12:56.169351 kernel: CPU features: detected: CRC32 instructions
May 17 00:12:56.169359 kernel: CPU features: detected: Enhanced Virtualization Traps
May 17 00:12:56.169367 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 00:12:56.169375 kernel: CPU features: detected: LSE atomic instructions
May 17 00:12:56.169382 kernel: CPU features: detected: Privileged Access Never
May 17 00:12:56.169390 kernel: CPU features: detected: RAS Extension Support
May 17 00:12:56.169399 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 17 00:12:56.169406 kernel: CPU: All CPU(s) started at EL2
May 17 00:12:56.169414 kernel: alternatives: applying system-wide alternatives
May 17 00:12:56.169422 kernel: devtmpfs: initialized
May 17 00:12:56.169430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:12:56.169437 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
May 17 00:12:56.169445 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:12:56.169453 kernel: SMBIOS 3.4.0 present.
May 17 00:12:56.169461 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021
May 17 00:12:56.169470 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:12:56.169477 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
May 17 00:12:56.169485 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:12:56.169493 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:12:56.169501 kernel: audit: initializing netlink subsys (disabled)
May 17 00:12:56.169509 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1
May 17 00:12:56.169516 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:12:56.169524 kernel: cpuidle: using governor menu
May 17 00:12:56.169532 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:12:56.169540 kernel: ASID allocator initialised with 32768 entries
May 17 00:12:56.169548 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:12:56.169556 kernel: Serial: AMBA PL011 UART driver
May 17 00:12:56.169564 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 00:12:56.169572 kernel: Modules: 0 pages in range for non-PLT usage
May 17 00:12:56.169579 kernel: Modules: 509024 pages in range for PLT usage
May 17 00:12:56.169589 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:12:56.169597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:12:56.169605 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:12:56.169614 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 00:12:56.169622 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:12:56.169629 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:12:56.169637 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:12:56.169645 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 00:12:56.169652 kernel: ACPI: Added _OSI(Module Device)
May 17 00:12:56.169660 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:12:56.169667 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:12:56.169675 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:12:56.169684 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
May 17 00:12:56.169692 kernel: ACPI: Interpreter enabled
May 17 00:12:56.169699 kernel: ACPI: Using GIC for interrupt routing
May 17 00:12:56.169707 kernel: ACPI: MCFG table detected, 8 entries
May 17 00:12:56.169715 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169723 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169730 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169738 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169746 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169755 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169763 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169771 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
May 17 00:12:56.169779 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
May 17 00:12:56.169787 kernel: printk: console [ttyAMA0] enabled
May 17 00:12:56.169794 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
May 17 00:12:56.169802 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
May 17 00:12:56.169930 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:12:56.170002 kernel: acpi PNP0A08:00: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 17 00:12:56.170066 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.170127 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.170189 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 17 00:12:56.170250 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] May 17 00:12:56.170261 kernel: PCI host bridge to bus 000d:00 May 17 00:12:56.170331 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 17 00:12:56.170392 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 17 00:12:56.170448 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 17 00:12:56.170526 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:12:56.170604 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:12:56.170672 kernel: pci 000d:00:01.0: enabling Extended Tags May 17 00:12:56.170736 kernel: pci 000d:00:01.0: supports D1 D2 May 17 00:12:56.170804 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.170877 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:12:56.170942 kernel: pci 000d:00:02.0: supports D1 D2 May 17 00:12:56.171006 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 17 00:12:56.171076 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:12:56.171140 kernel: pci 000d:00:03.0: supports D1 D2 May 17 00:12:56.171204 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.171277 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:12:56.171344 kernel: pci 000d:00:04.0: supports D1 D2 May 17 00:12:56.171408 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 17 00:12:56.171418 kernel: acpiphp: Slot [1] registered May 17 00:12:56.171426 
kernel: acpiphp: Slot [2] registered May 17 00:12:56.171434 kernel: acpiphp: Slot [3] registered May 17 00:12:56.171441 kernel: acpiphp: Slot [4] registered May 17 00:12:56.171502 kernel: pci_bus 000d:00: on NUMA node 0 May 17 00:12:56.171568 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.171665 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.171729 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.171793 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.171855 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.171917 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.171982 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.172047 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.172111 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.172176 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.172241 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.172303 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.172368 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 17 00:12:56.172434 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 
00:12:56.172498 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] May 17 00:12:56.172561 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:12:56.172628 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 17 00:12:56.172691 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:12:56.172754 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 17 00:12:56.172816 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:12:56.172880 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.172945 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173010 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173073 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173137 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173201 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173265 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173329 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173393 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173456 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173518 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173582 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173650 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173714 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.173777 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.173841 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] May 17 00:12:56.173903 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.173970 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 17 00:12:56.174033 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 00:12:56.174097 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 17 00:12:56.174160 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 17 00:12:56.174223 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:12:56.174287 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 17 00:12:56.174352 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 17 00:12:56.174416 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:12:56.174479 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 17 00:12:56.174542 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 17 00:12:56.174609 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:12:56.174668 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 17 00:12:56.174726 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 17 00:12:56.174797 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 17 00:12:56.174856 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 00:12:56.174924 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] May 17 00:12:56.174983 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 00:12:56.175058 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 17 00:12:56.175119 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 00:12:56.175184 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 17 
00:12:56.175242 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 00:12:56.175252 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 17 00:12:56.175322 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.175388 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.175448 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.175511 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.175571 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 17 00:12:56.175637 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 17 00:12:56.175647 kernel: PCI host bridge to bus 0000:00 May 17 00:12:56.175711 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 17 00:12:56.175768 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 17 00:12:56.175823 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:12:56.175898 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:12:56.175969 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:12:56.176033 kernel: pci 0000:00:01.0: enabling Extended Tags May 17 00:12:56.176096 kernel: pci 0000:00:01.0: supports D1 D2 May 17 00:12:56.176159 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176231 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:12:56.176297 kernel: pci 0000:00:02.0: supports D1 D2 May 17 00:12:56.176363 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176432 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:12:56.176496 kernel: pci 0000:00:03.0: supports D1 D2 May 17 00:12:56.176558 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176632 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:12:56.176695 kernel: pci 0000:00:04.0: supports D1 D2 May 17 00:12:56.176761 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 17 00:12:56.176771 kernel: acpiphp: Slot [1-1] registered May 17 00:12:56.176779 kernel: acpiphp: Slot [2-1] registered May 17 00:12:56.176787 kernel: acpiphp: Slot [3-1] registered May 17 00:12:56.176794 kernel: acpiphp: Slot [4-1] registered May 17 00:12:56.176849 kernel: pci_bus 0000:00: on NUMA node 0 May 17 00:12:56.176912 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.176974 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.177038 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.177104 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.177167 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.177229 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.177292 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.177356 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.177419 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.177485 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.177548 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 May 17 00:12:56.177614 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.177677 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 17 00:12:56.177742 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:12:56.177805 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 17 00:12:56.177868 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:12:56.177931 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 17 00:12:56.177996 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:12:56.178059 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 17 00:12:56.178123 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:12:56.178185 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178247 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178310 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178372 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178437 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178499 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178562 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178628 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178691 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178753 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178817 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.178879 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.178943 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.179007 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.179071 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.179133 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.179197 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.179259 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 17 00:12:56.179322 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:12:56.179386 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 17 00:12:56.179448 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 17 00:12:56.179514 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:12:56.179577 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 17 00:12:56.179643 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 17 00:12:56.179710 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:12:56.179775 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 17 00:12:56.179837 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 17 00:12:56.179901 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:12:56.179958 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 17 00:12:56.180015 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 17 00:12:56.180084 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 17 00:12:56.180143 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 00:12:56.180208 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 17 
00:12:56.180267 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 00:12:56.180340 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 17 00:12:56.180400 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 00:12:56.180467 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 17 00:12:56.180525 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 00:12:56.180535 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 17 00:12:56.180607 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.180669 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.180731 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.180792 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.180856 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 17 00:12:56.180916 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 17 00:12:56.180926 kernel: PCI host bridge to bus 0005:00 May 17 00:12:56.180989 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 17 00:12:56.181045 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 17 00:12:56.181101 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 17 00:12:56.181169 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:12:56.181243 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:12:56.181307 kernel: pci 0005:00:01.0: supports D1 D2 May 17 00:12:56.181371 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181440 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
May 17 00:12:56.181503 kernel: pci 0005:00:03.0: supports D1 D2 May 17 00:12:56.181566 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181640 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:12:56.181704 kernel: pci 0005:00:05.0: supports D1 D2 May 17 00:12:56.181767 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181838 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 17 00:12:56.181905 kernel: pci 0005:00:07.0: supports D1 D2 May 17 00:12:56.181970 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 17 00:12:56.181980 kernel: acpiphp: Slot [1-2] registered May 17 00:12:56.181987 kernel: acpiphp: Slot [2-2] registered May 17 00:12:56.182061 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 17 00:12:56.182127 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 17 00:12:56.182193 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 17 00:12:56.182268 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 17 00:12:56.182336 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 17 00:12:56.182401 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 17 00:12:56.182459 kernel: pci_bus 0005:00: on NUMA node 0 May 17 00:12:56.182525 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.182592 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.182655 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.182720 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.182783 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.182847 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.182912 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.182978 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.183040 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:12:56.183104 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.183183 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.183250 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 17 00:12:56.183315 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 17 00:12:56.183380 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:12:56.183444 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 17 00:12:56.183506 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:12:56.183576 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 17 00:12:56.183645 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:12:56.183709 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 17 00:12:56.183772 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:12:56.183836 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.183905 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.183969 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 
00:12:56.184031 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184095 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184159 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184221 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184285 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184347 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184412 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184474 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184537 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184603 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184666 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184729 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.184793 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.184855 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.184919 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 17 00:12:56.184985 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:12:56.185049 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 17 00:12:56.185112 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 17 00:12:56.185176 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:12:56.185244 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 17 00:12:56.185309 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 17 00:12:56.185376 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 17 00:12:56.185438 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 17 00:12:56.185502 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:12:56.185568 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 17 00:12:56.185637 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 17 00:12:56.185713 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 17 00:12:56.185776 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 17 00:12:56.185842 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:12:56.185902 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 17 00:12:56.185957 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 17 00:12:56.186026 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 17 00:12:56.186086 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 00:12:56.186160 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 17 00:12:56.186223 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 00:12:56.186288 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 17 00:12:56.186346 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 00:12:56.186411 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 17 00:12:56.186470 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 00:12:56.186480 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 17 00:12:56.186550 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.186617 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.186678 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability] May 17 00:12:56.186743 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.186803 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 17 00:12:56.186864 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 17 00:12:56.186874 kernel: PCI host bridge to bus 0003:00 May 17 00:12:56.186940 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 17 00:12:56.186997 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 17 00:12:56.187053 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 17 00:12:56.187123 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:12:56.187195 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:12:56.187259 kernel: pci 0003:00:01.0: supports D1 D2 May 17 00:12:56.187325 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.187398 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:12:56.187462 kernel: pci 0003:00:03.0: supports D1 D2 May 17 00:12:56.187525 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.187600 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:12:56.187664 kernel: pci 0003:00:05.0: supports D1 D2 May 17 00:12:56.187728 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 17 00:12:56.187741 kernel: acpiphp: Slot [1-3] registered May 17 00:12:56.187749 kernel: acpiphp: Slot [2-3] registered May 17 00:12:56.187823 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 17 00:12:56.187893 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 17 00:12:56.187961 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 17 00:12:56.188031 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 17 00:12:56.188097 kernel: pci 
0003:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:12:56.188165 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 17 00:12:56.188233 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:12:56.188298 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 17 00:12:56.188363 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 00:12:56.188428 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 17 00:12:56.188500 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 17 00:12:56.188565 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 17 00:12:56.188637 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 17 00:12:56.188704 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 17 00:12:56.188769 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 17 00:12:56.188834 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 17 00:12:56.188901 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:12:56.188966 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 17 00:12:56.189030 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 00:12:56.189088 kernel: pci_bus 0003:00: on NUMA node 0 May 17 00:12:56.189153 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.189218 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.189280 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.189346 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.189408 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.189471 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.189534 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 17 00:12:56.189643 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 17 00:12:56.189708 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 17 00:12:56.189770 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:12:56.189831 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 17 00:12:56.189892 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:12:56.189966 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 17 00:12:56.190029 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 00:12:56.190091 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190155 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190217 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190278 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190339 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190400 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190462 kernel: pci 0003:00:05.0: BAR 13: no 
space for [io size 0x1000] May 17 00:12:56.190524 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190586 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190654 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190717 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.190778 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.190841 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.190902 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 17 00:12:56.190964 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:12:56.191028 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 17 00:12:56.191090 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 17 00:12:56.191155 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:12:56.191221 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 17 00:12:56.191288 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 17 00:12:56.191354 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 17 00:12:56.191420 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 17 00:12:56.191485 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 17 00:12:56.191554 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 17 00:12:56.191622 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 17 00:12:56.191688 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 17 00:12:56.191753 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 00:12:56.191819 kernel: pci 0003:03:00.0: BAR 2: failed to 
assign [io size 0x0020] May 17 00:12:56.191883 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 17 00:12:56.191952 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 00:12:56.192018 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 00:12:56.192083 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 17 00:12:56.192149 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 17 00:12:56.192213 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 00:12:56.192277 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 17 00:12:56.192340 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 17 00:12:56.192407 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 00:12:56.192467 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 00:12:56.192524 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 17 00:12:56.192580 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 17 00:12:56.192659 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 17 00:12:56.192718 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 00:12:56.192787 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 17 00:12:56.192847 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 00:12:56.192913 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 17 00:12:56.192973 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 00:12:56.192984 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 17 00:12:56.193052 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.193115 kernel: acpi PNP0A08:04: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 17 00:12:56.193176 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.193239 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.193300 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 17 00:12:56.193360 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 17 00:12:56.193371 kernel: PCI host bridge to bus 000c:00 May 17 00:12:56.193433 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 17 00:12:56.193490 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 17 00:12:56.193546 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 17 00:12:56.193622 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:12:56.193694 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:12:56.193758 kernel: pci 000c:00:01.0: enabling Extended Tags May 17 00:12:56.193823 kernel: pci 000c:00:01.0: supports D1 D2 May 17 00:12:56.193885 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.193956 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 17 00:12:56.194019 kernel: pci 000c:00:02.0: supports D1 D2 May 17 00:12:56.194085 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 17 00:12:56.194157 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:12:56.194222 kernel: pci 000c:00:03.0: supports D1 D2 May 17 00:12:56.194284 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.194354 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:12:56.194417 kernel: pci 000c:00:04.0: supports D1 D2 May 17 00:12:56.194481 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 17 00:12:56.194493 kernel: acpiphp: Slot [1-4] registered May 17 00:12:56.194502 
kernel: acpiphp: Slot [2-4] registered May 17 00:12:56.194510 kernel: acpiphp: Slot [3-2] registered May 17 00:12:56.194518 kernel: acpiphp: Slot [4-2] registered May 17 00:12:56.194573 kernel: pci_bus 000c:00: on NUMA node 0 May 17 00:12:56.194643 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.194706 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.194773 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.194840 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.194906 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.194970 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.195035 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.195098 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.195161 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.195226 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.195291 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.195355 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.195418 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 17 00:12:56.195482 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] 
May 17 00:12:56.195545 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 17 00:12:56.195831 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:12:56.195901 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 17 00:12:56.195967 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:12:56.196029 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 17 00:12:56.196090 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:12:56.196152 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.196213 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.196275 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.196337 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.196399 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.196462 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.196525 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.196586 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.196652 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.196713 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.196775 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.196836 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.196898 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.196960 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.197024 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.197086 kernel: pci 
000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.197148 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.197210 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 17 00:12:56.197271 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 00:12:56.197333 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 17 00:12:56.197394 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 17 00:12:56.197459 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:12:56.197521 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 17 00:12:56.197583 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 17 00:12:56.197648 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:12:56.197711 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 17 00:12:56.197772 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 17 00:12:56.197837 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:12:56.197894 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 17 00:12:56.197949 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 17 00:12:56.198015 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 17 00:12:56.198073 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 00:12:56.198145 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 17 00:12:56.198204 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 00:12:56.198268 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 17 00:12:56.198325 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 00:12:56.198390 kernel: pci_bus 000c:04: resource 1 [mem 
0x40600000-0x407fffff] May 17 00:12:56.198447 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 00:12:56.198457 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 17 00:12:56.198526 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.198591 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.198652 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.198711 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.198771 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 17 00:12:56.198830 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 17 00:12:56.198841 kernel: PCI host bridge to bus 0002:00 May 17 00:12:56.198904 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 17 00:12:56.198962 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 17 00:12:56.199017 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 17 00:12:56.199086 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:12:56.199156 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:12:56.199219 kernel: pci 0002:00:01.0: supports D1 D2 May 17 00:12:56.199281 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.199350 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:12:56.199415 kernel: pci 0002:00:03.0: supports D1 D2 May 17 00:12:56.199477 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.199545 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:12:56.199611 kernel: pci 0002:00:05.0: supports D1 D2 May 17 00:12:56.199673 kernel: pci 0002:00:05.0: PME# supported 
from D0 D1 D3hot May 17 00:12:56.199742 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 17 00:12:56.199806 kernel: pci 0002:00:07.0: supports D1 D2 May 17 00:12:56.199869 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 17 00:12:56.199879 kernel: acpiphp: Slot [1-5] registered May 17 00:12:56.199887 kernel: acpiphp: Slot [2-5] registered May 17 00:12:56.199895 kernel: acpiphp: Slot [3-3] registered May 17 00:12:56.199903 kernel: acpiphp: Slot [4-3] registered May 17 00:12:56.199956 kernel: pci_bus 0002:00: on NUMA node 0 May 17 00:12:56.200019 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.200082 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.200146 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 00:12:56.200210 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.200273 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.200335 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.200399 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.200462 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.200524 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.200587 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.200651 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 
00:12:56.200714 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.200776 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 17 00:12:56.200840 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:12:56.200902 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 17 00:12:56.200965 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:12:56.201027 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 17 00:12:56.201089 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:12:56.201155 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 17 00:12:56.201218 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:12:56.201283 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.201346 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.201409 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.201471 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.201535 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.201600 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.201663 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.201725 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.201791 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.201854 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.201916 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.201978 kernel: pci 0002:00:05.0: BAR 13: failed 
to assign [io size 0x1000] May 17 00:12:56.202040 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.202102 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.202163 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.202225 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.202286 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.202348 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 17 00:12:56.202412 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:12:56.202475 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 17 00:12:56.202544 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 17 00:12:56.202612 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:12:56.202674 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 17 00:12:56.202736 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 17 00:12:56.202805 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:12:56.202868 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 17 00:12:56.202930 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 17 00:12:56.202993 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:12:56.203051 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 17 00:12:56.203107 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 17 00:12:56.203176 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 17 00:12:56.203234 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 00:12:56.203299 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 17 00:12:56.203357 kernel: pci_bus 0002:02: 
resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 00:12:56.203429 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 17 00:12:56.203487 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 00:12:56.203554 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 17 00:12:56.203711 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 00:12:56.203724 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 17 00:12:56.203801 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.203862 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.203922 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.203980 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.204042 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 17 00:12:56.204101 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 17 00:12:56.204111 kernel: PCI host bridge to bus 0001:00 May 17 00:12:56.204173 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 17 00:12:56.204228 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 17 00:12:56.204282 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 17 00:12:56.204353 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 17 00:12:56.204422 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 17 00:12:56.204485 kernel: pci 0001:00:01.0: enabling Extended Tags May 17 00:12:56.204547 kernel: pci 0001:00:01.0: supports D1 D2 May 17 00:12:56.204613 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.204684 kernel: pci 0001:00:02.0: 
[1def:e102] type 01 class 0x060400 May 17 00:12:56.204746 kernel: pci 0001:00:02.0: supports D1 D2 May 17 00:12:56.204810 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 17 00:12:56.204880 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 17 00:12:56.204943 kernel: pci 0001:00:03.0: supports D1 D2 May 17 00:12:56.205004 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.205074 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 17 00:12:56.205137 kernel: pci 0001:00:04.0: supports D1 D2 May 17 00:12:56.205202 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 17 00:12:56.205212 kernel: acpiphp: Slot [1-6] registered May 17 00:12:56.205282 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 00:12:56.205347 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 00:12:56.205412 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 17 00:12:56.205476 kernel: pci 0001:01:00.0: PME# supported from D3cold May 17 00:12:56.205541 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:12:56.205618 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 00:12:56.205688 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 00:12:56.205752 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 17 00:12:56.205816 kernel: pci 0001:01:00.1: PME# supported from D3cold May 17 00:12:56.205827 kernel: acpiphp: Slot [2-6] registered May 17 00:12:56.205835 kernel: acpiphp: Slot [3-4] registered May 17 00:12:56.205844 kernel: acpiphp: Slot [4-4] registered May 17 00:12:56.205898 kernel: pci_bus 0001:00: on NUMA node 0 May 17 00:12:56.205961 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 00:12:56.206026 kernel: pci 
0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 00:12:56.206089 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.206164 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 00:12:56.206229 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.206291 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.206353 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.206417 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.206481 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.206544 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.206610 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:12:56.206673 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 17 00:12:56.206735 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 17 00:12:56.206797 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:12:56.206860 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 17 00:12:56.206925 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:12:56.206987 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 17 00:12:56.207049 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:12:56.207112 kernel: 
pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.207174 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.207236 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.207298 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.207362 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.207425 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.207488 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.207549 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.207724 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.207789 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.207851 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.207912 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.207974 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.208038 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.208101 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.208162 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.208227 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 00:12:56.208292 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 00:12:56.208356 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 17 00:12:56.208420 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 17 00:12:56.208482 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 17 00:12:56.208546 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 17 00:12:56.208611 
kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:12:56.208674 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 17 00:12:56.208735 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 17 00:12:56.208798 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:12:56.208860 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 17 00:12:56.208924 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 17 00:12:56.208986 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:12:56.209049 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 17 00:12:56.209110 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 17 00:12:56.209172 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:12:56.209229 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 17 00:12:56.209286 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 17 00:12:56.209360 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 17 00:12:56.209418 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 17 00:12:56.209483 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 17 00:12:56.209542 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 17 00:12:56.209609 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 17 00:12:56.209667 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 17 00:12:56.209734 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 17 00:12:56.209791 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 17 00:12:56.209802 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 17 00:12:56.209870 kernel: acpi 
PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:12:56.209931 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 00:12:56.209991 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 17 00:12:56.210053 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 00:12:56.210113 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 17 00:12:56.210172 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 17 00:12:56.210183 kernel: PCI host bridge to bus 0004:00 May 17 00:12:56.210246 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 17 00:12:56.210300 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 17 00:12:56.210356 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 17 00:12:56.210426 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 17 00:12:56.210497 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 17 00:12:56.210560 kernel: pci 0004:00:01.0: supports D1 D2 May 17 00:12:56.210627 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 17 00:12:56.210696 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 17 00:12:56.210760 kernel: pci 0004:00:03.0: supports D1 D2 May 17 00:12:56.210823 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 17 00:12:56.210893 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 17 00:12:56.210957 kernel: pci 0004:00:05.0: supports D1 D2 May 17 00:12:56.211018 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 17 00:12:56.211089 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 17 00:12:56.211154 kernel: pci 0004:01:00.0: enabling Extended Tags May 17 00:12:56.211218 kernel: pci 0004:01:00.0: supports D1 D2 May 17 
00:12:56.211283 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:12:56.211361 kernel: pci_bus 0004:02: extended config space not accessible May 17 00:12:56.211435 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 17 00:12:56.211501 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 17 00:12:56.211568 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 17 00:12:56.211638 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 17 00:12:56.211704 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 17 00:12:56.211771 kernel: pci 0004:02:00.0: supports D1 D2 May 17 00:12:56.211840 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:12:56.211913 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 17 00:12:56.211977 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 17 00:12:56.212042 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:12:56.212101 kernel: pci_bus 0004:00: on NUMA node 0 May 17 00:12:56.212165 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 17 00:12:56.212227 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 00:12:56.212293 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 00:12:56.212357 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 00:12:56.212420 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 00:12:56.212482 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 00:12:56.212543 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 
00:12:56.212610 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 00:12:56.212674 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:12:56.212739 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 17 00:12:56.212801 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:12:56.212863 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 17 00:12:56.212925 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:12:56.212988 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.213049 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.213112 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.213174 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.213238 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.213300 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.213362 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.213424 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.213486 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.213548 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.213612 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.213675 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.213739 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 17 00:12:56.213805 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 17 00:12:56.213869 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 17 00:12:56.213936 kernel: pci 
0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 17 00:12:56.214003 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 17 00:12:56.214069 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 17 00:12:56.214136 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 17 00:12:56.214201 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 17 00:12:56.214267 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 17 00:12:56.214330 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 17 00:12:56.214392 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 17 00:12:56.214455 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:12:56.214519 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 17 00:12:56.214582 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 17 00:12:56.214648 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 17 00:12:56.214711 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:12:56.214776 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 17 00:12:56.214839 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 17 00:12:56.214900 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:12:56.214957 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 00:12:56.215012 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 17 00:12:56.215068 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 17 00:12:56.215136 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 17 00:12:56.215197 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 17 00:12:56.215258 kernel: pci_bus 0004:02: resource 1 [mem 
0x20000000-0x22ffffff] May 17 00:12:56.215324 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 17 00:12:56.215381 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 17 00:12:56.215446 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 17 00:12:56.215506 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 17 00:12:56.215516 kernel: iommu: Default domain type: Translated May 17 00:12:56.215524 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:12:56.215533 kernel: efivars: Registered efivars operations May 17 00:12:56.215601 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 17 00:12:56.215669 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 17 00:12:56.215736 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 17 00:12:56.215747 kernel: vgaarb: loaded May 17 00:12:56.215757 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:12:56.215765 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:12:56.215774 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:12:56.215782 kernel: pnp: PnP ACPI init May 17 00:12:56.215851 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 17 00:12:56.215909 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 17 00:12:56.215967 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 17 00:12:56.216025 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved May 17 00:12:56.216082 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 17 00:12:56.216138 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 17 00:12:56.216197 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved May 17 00:12:56.216255 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 17 00:12:56.216265 kernel: pnp: PnP ACPI: found 1 devices May 17 00:12:56.216273 kernel: NET: Registered PF_INET protocol family May 17 00:12:56.216281 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216291 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 00:12:56.216300 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:12:56.216308 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:12:56.216317 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216325 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 17 00:12:56.216333 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216341 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:12:56.216350 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:12:56.216414 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 17 00:12:56.216428 kernel: kvm [1]: IPA Size Limit: 48 bits May 17 00:12:56.216436 kernel: kvm [1]: GICv3: no GICV resource entry May 17 00:12:56.216444 kernel: kvm [1]: disabling GICv2 emulation May 17 00:12:56.216453 kernel: kvm [1]: GIC system register CPU interface enabled May 17 00:12:56.216461 kernel: kvm [1]: vgic interrupt IRQ9 May 17 00:12:56.216469 kernel: kvm [1]: VHE mode initialized successfully May 17 00:12:56.216477 kernel: Initialise system trusted keyrings May 17 00:12:56.216485 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 17 00:12:56.216493 kernel: Key type asymmetric registered May 17 00:12:56.216502 kernel: Asymmetric key parser 'x509' registered May 17 00:12:56.216510 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 17 00:12:56.216518 kernel: io scheduler mq-deadline registered May 17 00:12:56.216526 kernel: io scheduler kyber registered May 17 00:12:56.216535 kernel: io scheduler bfq registered May 17 00:12:56.216543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:12:56.216551 kernel: ACPI: button: Power Button [PWRB] May 17 00:12:56.216559 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 17 00:12:56.216567 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:12:56.216641 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 17 00:12:56.216701 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.216760 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.216817 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 17 00:12:56.216876 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 17 00:12:56.216933 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 17 00:12:56.217002 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 17 00:12:56.217061 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.217119 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.217176 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 17 00:12:56.217234 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 17 00:12:56.217292 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 17 00:12:56.217357 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 17 00:12:56.217418 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.217476 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.217534 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 17 00:12:56.217594 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 17 00:12:56.217653 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 17 00:12:56.217718 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 17 00:12:56.217778 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.217836 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.217894 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 17 00:12:56.217952 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 17 00:12:56.218010 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 17 00:12:56.218081 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 17 00:12:56.218140 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.218200 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.218259 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 17 00:12:56.218317 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 17 00:12:56.218375 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 17 00:12:56.218442 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 17 00:12:56.218501 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.218560 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.218621 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 17 00:12:56.218680 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 17 00:12:56.218737 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 17 00:12:56.218802 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 17 00:12:56.218860 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.218918 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.218980 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 17 00:12:56.219039 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 17 00:12:56.219097 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 17 00:12:56.219162 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 17 00:12:56.219220 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 17 00:12:56.219278 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 17 00:12:56.219339 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 17 00:12:56.219397 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 17 00:12:56.219455 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 17 00:12:56.219466 kernel: thunder_xcv, ver 1.0 May 17 00:12:56.219474 kernel: thunder_bgx, ver 1.0 May 17 00:12:56.219482 kernel: nicpf, ver 1.0 May 17 00:12:56.219490 kernel: nicvf, ver 1.0 May 17 00:12:56.219552 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:12:56.219616 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:12:54 UTC (1747440774) May 17 00:12:56.219627 kernel: efifb: probing for efifb May 17 00:12:56.219635 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 17 00:12:56.219644 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:12:56.219652 kernel: efifb: scrolling: redraw May 
17 00:12:56.219660 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:12:56.219668 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:12:56.219676 kernel: fb0: EFI VGA frame buffer device May 17 00:12:56.219686 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 17 00:12:56.219694 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:12:56.219703 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:12:56.219711 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:12:56.219719 kernel: watchdog: Hard watchdog permanently disabled May 17 00:12:56.219727 kernel: NET: Registered PF_INET6 protocol family May 17 00:12:56.219737 kernel: Segment Routing with IPv6 May 17 00:12:56.219745 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:12:56.219753 kernel: NET: Registered PF_PACKET protocol family May 17 00:12:56.219763 kernel: Key type dns_resolver registered May 17 00:12:56.219771 kernel: registered taskstats version 1 May 17 00:12:56.219779 kernel: Loading compiled-in X.509 certificates May 17 00:12:56.219787 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:12:56.219795 kernel: Key type .fscrypt registered May 17 00:12:56.219803 kernel: Key type fscrypt-provisioning registered May 17 00:12:56.219811 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:12:56.219819 kernel: ima: Allocated hash algorithm: sha1 May 17 00:12:56.219827 kernel: ima: No architecture policies found May 17 00:12:56.219835 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:12:56.219901 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 17 00:12:56.219966 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 17 00:12:56.220031 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 17 00:12:56.220095 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 17 00:12:56.220160 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 17 00:12:56.220224 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 17 00:12:56.220288 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 17 00:12:56.220352 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 17 00:12:56.220419 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 17 00:12:56.220485 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 17 00:12:56.220549 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 17 00:12:56.220616 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 17 00:12:56.220681 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 17 00:12:56.220744 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 17 00:12:56.220810 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 17 00:12:56.220873 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 17 00:12:56.220941 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 17 00:12:56.221004 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 17 00:12:56.221069 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 17 00:12:56.221132 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 17 00:12:56.221197 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 17 00:12:56.221260 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 17 00:12:56.221325 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 May 17 00:12:56.221390 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 17 00:12:56.221459 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 17 00:12:56.221522 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 17 00:12:56.221590 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 17 00:12:56.221655 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 17 00:12:56.221719 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 17 00:12:56.221783 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 17 00:12:56.221848 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 17 00:12:56.221912 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 17 00:12:56.221976 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 17 00:12:56.222042 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 17 00:12:56.222106 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 17 00:12:56.222171 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 17 00:12:56.222234 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 17 00:12:56.222298 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 17 00:12:56.222363 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 17 00:12:56.222427 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 17 00:12:56.222491 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 17 00:12:56.222557 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 17 00:12:56.222627 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 17 00:12:56.222692 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 17 00:12:56.222758 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 17 00:12:56.222821 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 17 00:12:56.222886 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 17 00:12:56.222949 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
May 17 00:12:56.223015 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 17 00:12:56.223081 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 17 00:12:56.223146 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 17 00:12:56.223208 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 17 00:12:56.223273 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 17 00:12:56.223336 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 17 00:12:56.223401 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 17 00:12:56.223464 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 17 00:12:56.223529 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 17 00:12:56.223597 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 17 00:12:56.223662 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 17 00:12:56.223727 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 17 00:12:56.223793 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 17 00:12:56.223804 kernel: clk: Disabling unused clocks May 17 00:12:56.223813 kernel: Freeing unused kernel memory: 39424K May 17 00:12:56.223821 kernel: Run /init as init process May 17 00:12:56.223829 kernel: with arguments: May 17 00:12:56.223839 kernel: /init May 17 00:12:56.223846 kernel: with environment: May 17 00:12:56.223854 kernel: HOME=/ May 17 00:12:56.223862 kernel: TERM=linux May 17 00:12:56.223870 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:12:56.223880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:12:56.223891 systemd[1]: Detected architecture arm64. May 17 00:12:56.223899 systemd[1]: Running in initrd. 
May 17 00:12:56.223909 systemd[1]: No hostname configured, using default hostname. May 17 00:12:56.223917 systemd[1]: Hostname set to . May 17 00:12:56.223925 systemd[1]: Initializing machine ID from random generator. May 17 00:12:56.223934 systemd[1]: Queued start job for default target initrd.target. May 17 00:12:56.223943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:12:56.223951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:12:56.223960 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:12:56.223969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:12:56.223979 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:12:56.223988 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:12:56.223997 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:12:56.224006 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:12:56.224015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:12:56.224023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:12:56.224032 systemd[1]: Reached target paths.target - Path Units. May 17 00:12:56.224042 systemd[1]: Reached target slices.target - Slice Units. May 17 00:12:56.224050 systemd[1]: Reached target swap.target - Swaps. May 17 00:12:56.224059 systemd[1]: Reached target timers.target - Timer Units. May 17 00:12:56.224067 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 17 00:12:56.224075 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:12:56.224084 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:12:56.224092 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:12:56.224101 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:12:56.224111 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:12:56.224119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:12:56.224128 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:12:56.224136 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:12:56.224145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:12:56.224153 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:12:56.224162 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:12:56.224170 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:12:56.224179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:12:56.224208 systemd-journald[898]: Collecting audit messages is disabled. May 17 00:12:56.224228 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:12:56.224237 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:12:56.224245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:12:56.224255 kernel: Bridge firewalling registered May 17 00:12:56.224264 systemd-journald[898]: Journal started May 17 00:12:56.224283 systemd-journald[898]: Runtime Journal (/run/log/journal/501be0719bdb4ce09b7001a7f462581c) is 8.0M, max 4.0G, 3.9G free. 
May 17 00:12:56.183783 systemd-modules-load[900]: Inserted module 'overlay' May 17 00:12:56.264304 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:12:56.206341 systemd-modules-load[900]: Inserted module 'br_netfilter' May 17 00:12:56.270016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:12:56.280963 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:12:56.291908 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:12:56.302691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:12:56.332724 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:12:56.339009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:12:56.357041 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:12:56.368458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:12:56.385245 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:12:56.401722 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:12:56.418474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:12:56.429968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:12:56.459693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:12:56.473090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 17 00:12:56.481730 dracut-cmdline[946]: dracut-dracut-053 May 17 00:12:56.492849 dracut-cmdline[946]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:12:56.486902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:12:56.501114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:12:56.510998 systemd-resolved[952]: Positive Trust Anchors: May 17 00:12:56.511008 systemd-resolved[952]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:12:56.511040 systemd-resolved[952]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:12:56.526257 systemd-resolved[952]: Defaulting to hostname 'linux'. May 17 00:12:56.539058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:12:56.558657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:12:56.663262 kernel: SCSI subsystem initialized May 17 00:12:56.674596 kernel: Loading iSCSI transport class v2.0-870. 
May 17 00:12:56.693596 kernel: iscsi: registered transport (tcp) May 17 00:12:56.721129 kernel: iscsi: registered transport (qla4xxx) May 17 00:12:56.721151 kernel: QLogic iSCSI HBA Driver May 17 00:12:56.764570 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:12:56.783713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:12:56.828910 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:12:56.828930 kernel: device-mapper: uevent: version 1.0.3 May 17 00:12:56.847595 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:12:56.904599 kernel: raid6: neonx8 gen() 15849 MB/s May 17 00:12:56.930597 kernel: raid6: neonx4 gen() 15714 MB/s May 17 00:12:56.955597 kernel: raid6: neonx2 gen() 13274 MB/s May 17 00:12:56.980597 kernel: raid6: neonx1 gen() 10526 MB/s May 17 00:12:57.005597 kernel: raid6: int64x8 gen() 7000 MB/s May 17 00:12:57.030597 kernel: raid6: int64x4 gen() 7375 MB/s May 17 00:12:57.055597 kernel: raid6: int64x2 gen() 6153 MB/s May 17 00:12:57.083634 kernel: raid6: int64x1 gen() 5077 MB/s May 17 00:12:57.083655 kernel: raid6: using algorithm neonx8 gen() 15849 MB/s May 17 00:12:57.118051 kernel: raid6: .... xor() 11973 MB/s, rmw enabled May 17 00:12:57.118072 kernel: raid6: using neon recovery algorithm May 17 00:12:57.141190 kernel: xor: measuring software checksum speed May 17 00:12:57.141215 kernel: 8regs : 19769 MB/sec May 17 00:12:57.153593 kernel: 32regs : 19308 MB/sec May 17 00:12:57.164598 kernel: arm64_neon : 26518 MB/sec May 17 00:12:57.164619 kernel: xor: using function: arm64_neon (26518 MB/sec) May 17 00:12:57.225599 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:12:57.235961 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 17 00:12:57.257709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:12:57.270876 systemd-udevd[1143]: Using default interface naming scheme 'v255'.
May 17 00:12:57.273915 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:12:57.291741 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:12:57.305995 dracut-pre-trigger[1153]: rd.md=0: removing MD RAID activation
May 17 00:12:57.332414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:12:57.354708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:12:57.460336 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:12:57.481762 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:12:57.653484 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:12:57.653507 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:12:57.653521 kernel: ACPI: bus type USB registered
May 17 00:12:57.653531 kernel: usbcore: registered new interface driver usbfs
May 17 00:12:57.653541 kernel: usbcore: registered new interface driver hub
May 17 00:12:57.653550 kernel: usbcore: registered new device driver usb
May 17 00:12:57.653560 kernel: PTP clock support registered
May 17 00:12:57.653570 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31
May 17 00:12:57.653726 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 00:12:57.653810 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
May 17 00:12:57.653891 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
May 17 00:12:57.653972 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 17 00:12:57.653982 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 17 00:12:57.653992 kernel: igb 0003:03:00.0: Adding to iommu group 32
May 17 00:12:57.654079 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 33
May 17 00:12:57.557022 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:12:57.688076 kernel: nvme 0005:03:00.0: Adding to iommu group 34
May 17 00:12:57.688193 kernel: nvme 0005:04:00.0: Adding to iommu group 35
May 17 00:12:57.557085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:12:57.682476 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:12:57.693773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:12:57.693843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:12:57.712565 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:12:57.737680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:12:57.745615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:12:57.760606 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:12:57.965750 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
May 17 00:12:57.965976 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 17 00:12:57.966058 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
May 17 00:12:57.966135 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:12:57.966210 kernel: hub 1-0:1.0: USB hub found
May 17 00:12:57.966306 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:12:57.966383 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:12:57.966470 kernel: hub 2-0:1.0: USB hub found
May 17 00:12:57.966555 kernel: hub 2-0:1.0: 4 ports detected
May 17 00:12:57.966638 kernel: nvme nvme0: pci function 0005:03:00.0
May 17 00:12:57.966724 kernel: mlx5_core 0001:01:00.0: firmware version: 14.30.1004
May 17 00:12:57.966810 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:12:57.966887 kernel: nvme nvme1: pci function 0005:04:00.0
May 17 00:12:57.770698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:12:57.784701 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:12:57.924720 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:12:58.068643 kernel: igb 0003:03:00.0: added PHC on eth0
May 17 00:12:58.068854 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 17 00:12:58.068939 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6c
May 17 00:12:58.069014 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
May 17 00:12:58.069088 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 00:12:58.069163 kernel: igb 0003:03:00.1: Adding to iommu group 36
May 17 00:12:58.069244 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 17 00:12:57.979732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:12:58.088549 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 17 00:12:58.000706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:12:58.234014 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 17 00:12:58.234151 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:12:58.234164 kernel: GPT:9289727 != 1875385007
May 17 00:12:58.234180 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:12:58.234189 kernel: GPT:9289727 != 1875385007
May 17 00:12:58.234199 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:12:58.234208 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:12:58.234218 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 17 00:12:58.234299 kernel: igb 0003:03:00.1: added PHC on eth1
May 17 00:12:58.234389 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1200)
May 17 00:12:58.234400 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (1219)
May 17 00:12:58.234412 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 17 00:12:58.234490 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6d
May 17 00:12:58.234566 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 17 00:12:58.073792 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:12:58.305849 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 17 00:12:58.305973 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 17 00:12:58.306051 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 17 00:12:58.149624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 17 00:12:58.330546 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 17 00:12:58.301372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:12:58.330898 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 17 00:12:58.350298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 00:12:58.365328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:12:58.405008 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 17 00:12:58.377519 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 17 00:12:58.425737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:12:58.453686 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:12:58.453701 disk-uuid[1315]: Primary Header is updated.
May 17 00:12:58.453701 disk-uuid[1315]: Secondary Entries is updated.
May 17 00:12:58.453701 disk-uuid[1315]: Secondary Header is updated.
May 17 00:12:58.530465 kernel: hub 1-3:1.0: USB hub found
May 17 00:12:58.530727 kernel: hub 1-3:1.0: 4 ports detected
May 17 00:12:58.592600 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:12:58.605593 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 17 00:12:58.628809 kernel: mlx5_core 0001:01:00.1: firmware version: 14.30.1004
May 17 00:12:58.628961 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 17 00:12:58.657594 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 17 00:12:58.700084 kernel: hub 2-3:1.0: USB hub found
May 17 00:12:58.700276 kernel: hub 2-3:1.0: 4 ports detected
May 17 00:12:58.928220 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 17 00:12:59.211600 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 00:12:59.226595 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 17 00:12:59.244595 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 17 00:12:59.452345 disk-uuid[1316]: The operation has completed successfully.
May 17 00:12:59.457988 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:12:59.473552 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:12:59.473639 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:12:59.517695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:12:59.528038 sh[1482]: Success
May 17 00:12:59.546603 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:12:59.579906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:12:59.599744 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:12:59.610037 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:12:59.615593 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 00:12:59.615613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 00:12:59.615623 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:12:59.615634 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:12:59.615644 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:12:59.618592 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:12:59.704628 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:12:59.711300 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:12:59.722762 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:12:59.803708 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:12:59.803722 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:12:59.803732 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:12:59.803742 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:12:59.803751 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:12:59.731570 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:12:59.842126 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:12:59.831982 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:12:59.866703 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:12:59.913663 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:12:59.934269 ignition[1591]: Ignition 2.19.0
May 17 00:12:59.934276 ignition[1591]: Stage: fetch-offline
May 17 00:12:59.938021 unknown[1591]: fetched base config from "system"
May 17 00:12:59.934321 ignition[1591]: no configs at "/usr/lib/ignition/base.d"
May 17 00:12:59.938028 unknown[1591]: fetched user config from "system"
May 17 00:12:59.934329 ignition[1591]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:12:59.945752 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:12:59.934474 ignition[1591]: parsed url from cmdline: ""
May 17 00:12:59.956722 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:12:59.934478 ignition[1591]: no config URL provided
May 17 00:12:59.968905 systemd-networkd[1714]: lo: Link UP
May 17 00:12:59.934482 ignition[1591]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:12:59.968908 systemd-networkd[1714]: lo: Gained carrier
May 17 00:12:59.934534 ignition[1591]: parsing config with SHA512: 818ac516e6477d480d76b184d69e9bf370c42d49acd8c286ed77f0ebc672e798ddd7a66bfac932d56bf402354122cd19e0e910e6813420b97e10863b00e186da
May 17 00:12:59.972422 systemd-networkd[1714]: Enumeration completed
May 17 00:12:59.939873 ignition[1591]: fetch-offline: fetch-offline passed
May 17 00:12:59.972606 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:12:59.939879 ignition[1591]: POST message to Packet Timeline
May 17 00:12:59.973565 systemd-networkd[1714]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:12:59.939884 ignition[1591]: POST Status error: resource requires networking
May 17 00:12:59.978459 systemd[1]: Reached target network.target - Network.
May 17 00:12:59.939964 ignition[1591]: Ignition finished successfully
May 17 00:12:59.988559 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:13:00.026205 ignition[1723]: Ignition 2.19.0
May 17 00:13:00.001785 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:13:00.026211 ignition[1723]: Stage: kargs
May 17 00:13:00.025483 systemd-networkd[1714]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:00.026433 ignition[1723]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:00.077188 systemd-networkd[1714]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:00.026442 ignition[1723]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:00.027445 ignition[1723]: kargs: kargs passed
May 17 00:13:00.027449 ignition[1723]: POST message to Packet Timeline
May 17 00:13:00.027461 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:00.029972 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47133->[::1]:53: read: connection refused
May 17 00:13:00.230085 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #2
May 17 00:13:00.230485 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51606->[::1]:53: read: connection refused
May 17 00:13:00.608603 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 00:13:00.611267 systemd-networkd[1714]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:13:00.630640 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #3
May 17 00:13:00.630974 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49118->[::1]:53: read: connection refused
May 17 00:13:01.206601 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 00:13:01.209243 systemd-networkd[1714]: eno1: Link UP
May 17 00:13:01.209372 systemd-networkd[1714]: eno2: Link UP
May 17 00:13:01.209488 systemd-networkd[1714]: enP1p1s0f0np0: Link UP
May 17 00:13:01.209585 systemd-networkd[1714]: enP1p1s0f0np0: Gained carrier
May 17 00:13:01.220872 systemd-networkd[1714]: enP1p1s0f1np1: Link UP
May 17 00:13:01.252621 systemd-networkd[1714]: enP1p1s0f0np0: DHCPv4 address 147.28.151.230/30, gateway 147.28.151.229 acquired from 147.28.144.140
May 17 00:13:01.431828 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #4
May 17 00:13:01.432525 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54497->[::1]:53: read: connection refused
May 17 00:13:01.616031 systemd-networkd[1714]: enP1p1s0f1np1: Gained carrier
May 17 00:13:02.215816 systemd-networkd[1714]: enP1p1s0f0np0: Gained IPv6LL
May 17 00:13:02.983793 systemd-networkd[1714]: enP1p1s0f1np1: Gained IPv6LL
May 17 00:13:03.034135 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #5
May 17 00:13:03.034702 ignition[1723]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49755->[::1]:53: read: connection refused
May 17 00:13:06.237759 ignition[1723]: GET https://metadata.packet.net/metadata: attempt #6
May 17 00:13:07.380557 ignition[1723]: GET result: OK
May 17 00:13:08.272388 ignition[1723]: Ignition finished successfully
May 17 00:13:08.275950 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:13:08.286706 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:13:08.303014 ignition[1747]: Ignition 2.19.0
May 17 00:13:08.303021 ignition[1747]: Stage: disks
May 17 00:13:08.303208 ignition[1747]: no configs at "/usr/lib/ignition/base.d"
May 17 00:13:08.303217 ignition[1747]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:08.304292 ignition[1747]: disks: disks passed
May 17 00:13:08.304296 ignition[1747]: POST message to Packet Timeline
May 17 00:13:08.304309 ignition[1747]: GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:08.816809 ignition[1747]: GET result: OK
May 17 00:13:09.191846 ignition[1747]: Ignition finished successfully
May 17 00:13:09.193922 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:13:09.200298 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:13:09.207858 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:13:09.215789 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:13:09.224314 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:13:09.233100 systemd[1]: Reached target basic.target - Basic System.
May 17 00:13:09.250736 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:13:09.266056 systemd-fsck[1770]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:13:09.269663 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:13:09.289687 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:13:09.354508 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:13:09.359446 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 00:13:09.364727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:13:09.386672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:09.394593 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1780)
May 17 00:13:09.394610 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:09.394621 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:09.394631 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:09.395593 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:09.395603 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:09.489667 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:13:09.496062 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:13:09.507453 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 17 00:13:09.522723 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:13:09.522752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:09.535867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:09.566507 coreos-metadata[1800]: May 17 00:13:09.551 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:09.585641 coreos-metadata[1798]: May 17 00:13:09.551 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:09.549863 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:13:09.574705 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:13:09.613858 initrd-setup-root[1819]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:13:09.619869 initrd-setup-root[1826]: cut: /sysroot/etc/group: No such file or directory
May 17 00:13:09.625785 initrd-setup-root[1833]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:13:09.631726 initrd-setup-root[1840]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:13:09.700409 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:13:09.722692 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:13:09.734511 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:13:09.759900 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:09.765791 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:13:09.786808 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:13:09.803036 ignition[1913]: INFO : Ignition 2.19.0
May 17 00:13:09.803036 ignition[1913]: INFO : Stage: mount
May 17 00:13:09.814031 ignition[1913]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:09.814031 ignition[1913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:09.814031 ignition[1913]: INFO : mount: mount passed
May 17 00:13:09.814031 ignition[1913]: INFO : POST message to Packet Timeline
May 17 00:13:09.814031 ignition[1913]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:09.983814 coreos-metadata[1798]: May 17 00:13:09.983 INFO Fetch successful
May 17 00:13:10.028876 coreos-metadata[1798]: May 17 00:13:10.028 INFO wrote hostname ci-4081.3.3-n-3bfd76e738 to /sysroot/etc/hostname
May 17 00:13:10.032021 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:10.220030 coreos-metadata[1800]: May 17 00:13:10.220 INFO Fetch successful
May 17 00:13:10.268552 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 00:13:10.268722 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 17 00:13:10.312714 ignition[1913]: INFO : GET result: OK
May 17 00:13:10.608789 ignition[1913]: INFO : Ignition finished successfully
May 17 00:13:10.611008 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:13:10.633693 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:13:10.645982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:13:10.681448 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1937)
May 17 00:13:10.681485 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:13:10.695877 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:13:10.708925 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:13:10.731863 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:13:10.731885 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 17 00:13:10.739959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:13:10.768964 ignition[1954]: INFO : Ignition 2.19.0
May 17 00:13:10.768964 ignition[1954]: INFO : Stage: files
May 17 00:13:10.778463 ignition[1954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:10.778463 ignition[1954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:10.778463 ignition[1954]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:13:10.778463 ignition[1954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:13:10.778463 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:13:10.778463 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 17 00:13:10.774416 unknown[1954]: wrote ssh authorized keys file for user: core
May 17 00:13:11.698056 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:13:12.743038 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:12.753880 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 17 00:13:13.145730 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:13:13.458731 ignition[1954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:13:13.458731 ignition[1954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:13:13.483558 ignition[1954]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:13.483558 ignition[1954]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:13:13.483558 ignition[1954]: INFO : files: files passed
May 17 00:13:13.483558 ignition[1954]: INFO : POST message to Packet Timeline
May 17 00:13:13.483558 ignition[1954]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:13.986930 ignition[1954]: INFO : GET result: OK
May 17 00:13:14.276585 ignition[1954]: INFO : Ignition finished successfully
May 17 00:13:14.278954 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:13:14.300718 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:13:14.313258 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:13:14.332055 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:13:14.332133 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:13:14.350549 initrd-setup-root-after-ignition[1999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:14.350549 initrd-setup-root-after-ignition[1999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:14.345035 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:14.403333 initrd-setup-root-after-ignition[2004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:13:14.358171 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:13:14.383791 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:13:14.417675 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:13:14.417767 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:13:14.427667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:13:14.443919 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:13:14.455529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:13:14.470768 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:13:14.492857 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:14.513749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:13:14.528804 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:13:14.538415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:13:14.549991 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:13:14.561612 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:13:14.561714 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:13:14.573369 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:13:14.584680 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:13:14.596222 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:13:14.607779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:13:14.619144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:13:14.630540 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:13:14.641875 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:13:14.653273 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:13:14.664747 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:13:14.681774 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:13:14.693094 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:13:14.693193 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:13:14.704661 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:13:14.715772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:13:14.727098 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:13:14.731626 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:13:14.738496 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:13:14.738600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:13:14.750056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:13:14.750176 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:13:14.761438 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:13:14.772673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:13:14.772767 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:13:14.790011 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:13:14.801515 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:13:14.813097 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:13:14.813192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:13:14.824719 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:13:14.824810 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:13:14.938109 ignition[2026]: INFO : Ignition 2.19.0
May 17 00:13:14.938109 ignition[2026]: INFO : Stage: umount
May 17 00:13:14.938109 ignition[2026]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:13:14.938109 ignition[2026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 00:13:14.938109 ignition[2026]: INFO : umount: umount passed
May 17 00:13:14.938109 ignition[2026]: INFO : POST message to Packet Timeline
May 17 00:13:14.938109 ignition[2026]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 00:13:14.836491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:13:14.836581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:13:14.848162 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:13:14.848247 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:13:14.859840 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:13:14.859923 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:13:14.887798 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:13:14.896384 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:13:14.908496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:13:14.908612 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:13:14.920789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:13:14.920879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:13:14.934621 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:13:14.935529 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:13:14.935615 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:13:14.945565 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:13:14.945726 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:13:15.509895 ignition[2026]: INFO : GET result: OK
May 17 00:13:15.834431 ignition[2026]: INFO : Ignition finished successfully
May 17 00:13:15.837291 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:13:15.837441 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:13:15.844469 systemd[1]: Stopped target network.target - Network.
May 17 00:13:15.853470 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:13:15.853529 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:13:15.862975 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:13:15.863046 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:13:15.872378 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:13:15.872416 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:13:15.881773 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:13:15.881805 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:13:15.891508 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:13:15.891536 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:13:15.901331 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:13:15.906607 systemd-networkd[1714]: enP1p1s0f0np0: DHCPv6 lease lost
May 17 00:13:15.911014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:13:15.915745 systemd-networkd[1714]: enP1p1s0f1np1: DHCPv6 lease lost
May 17 00:13:15.922209 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:13:15.922460 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:13:15.933278 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:13:15.934679 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:13:15.942482 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:13:15.942678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:13:15.964728 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:13:15.970570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:13:15.970626 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:13:15.980610 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:13:15.980645 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:13:15.990689 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:13:15.990719 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:13:16.000958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:13:16.000989 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:13:16.011397 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:13:16.035951 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:13:16.036075 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:13:16.045212 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:13:16.045357 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:13:16.054285 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:13:16.054321 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:13:16.064911 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:13:16.064948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:13:16.081013 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:13:16.081055 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:13:16.091699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:13:16.091735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:13:16.115772 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:13:16.124870 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:13:16.124933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:13:16.135879 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 17 00:13:16.135909 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:13:16.152314 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:13:16.152347 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:13:16.169662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:13:16.169708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:13:16.181661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:13:16.181732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:13:16.688810 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:13:16.689703 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:13:16.700425 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:13:16.722700 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:13:16.736318 systemd[1]: Switching root.
May 17 00:13:16.797122 systemd-journald[898]: Journal stopped
May 17 00:13:18.803383 systemd-journald[898]: Received SIGTERM from PID 1 (systemd).
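The gap between `Journal stopped` and the journal's `Received SIGTERM` entry brackets the switch-root handoff, about two seconds here. A small Python sketch (an illustrative helper, assuming the same `May 17 HH:MM:SS.ffffff` prefix format as these entries) for measuring gaps between journal lines:

```python
from datetime import datetime

def parse_ts(line):
    """Parse the 'May 17 00:13:16.797122' prefix of a journal line.

    The year is absent from syslog-style timestamps, so datetime's
    default (1900) is used; deltas within one boot are still correct.
    """
    return datetime.strptime(" ".join(line.split()[:3]), "%b %d %H:%M:%S.%f")

stopped = parse_ts("May 17 00:13:16.797122 systemd-journald[898]: Journal stopped")
sigterm = parse_ts("May 17 00:13:18.803383 systemd-journald[898]: Received SIGTERM from PID 1 (systemd).")
print((sigterm - stopped).total_seconds())  # -> 2.006261
```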
May 17 00:13:18.803414 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:13:18.803425 kernel: SELinux: policy capability open_perms=1
May 17 00:13:18.803433 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:13:18.803441 kernel: SELinux: policy capability always_check_network=0
May 17 00:13:18.803448 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:13:18.803457 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:13:18.803467 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:13:18.803475 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:13:18.803483 kernel: audit: type=1403 audit(1747440796.990:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:13:18.803492 systemd[1]: Successfully loaded SELinux policy in 114.339ms.
May 17 00:13:18.803502 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.649ms.
May 17 00:13:18.803512 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:13:18.803521 systemd[1]: Detected architecture arm64.
May 17 00:13:18.803532 systemd[1]: Detected first boot.
May 17 00:13:18.803541 systemd[1]: Hostname set to .
May 17 00:13:18.803551 systemd[1]: Initializing machine ID from random generator.
May 17 00:13:18.803560 zram_generator::config[2092]: No configuration found.
May 17 00:13:18.803570 systemd[1]: Populated /etc with preset unit settings.
May 17 00:13:18.803580 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:13:18.803592 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:13:18.803601 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:13:18.803611 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:13:18.803620 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:13:18.803629 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:13:18.803639 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:13:18.803652 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:13:18.803662 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:13:18.803671 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:13:18.803680 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:13:18.803689 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:13:18.803699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:13:18.803708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:13:18.803719 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:13:18.803729 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:13:18.803738 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:13:18.803747 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 17 00:13:18.803756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:13:18.803765 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:13:18.803775 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:13:18.803786 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:13:18.803795 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:13:18.803806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:13:18.803816 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:13:18.803825 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:13:18.803834 systemd[1]: Reached target swap.target - Swaps.
May 17 00:13:18.803844 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:13:18.803853 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:13:18.803862 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:13:18.803873 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:13:18.803883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:13:18.803893 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:13:18.803902 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:13:18.803912 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:13:18.803923 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:13:18.803933 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:13:18.803942 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:13:18.803952 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:13:18.803962 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:13:18.803972 systemd[1]: Reached target machines.target - Containers.
May 17 00:13:18.803981 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:13:18.803991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:13:18.804002 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:13:18.804011 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:13:18.804021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:13:18.804030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:13:18.804040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:13:18.804049 kernel: ACPI: bus type drm_connector registered
May 17 00:13:18.804059 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:13:18.804068 kernel: fuse: init (API version 7.39)
May 17 00:13:18.804077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:13:18.804088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:13:18.804098 kernel: loop: module loaded
May 17 00:13:18.804107 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:13:18.804116 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:13:18.804126 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:13:18.804135 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:13:18.804145 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:13:18.804155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:13:18.804185 systemd-journald[2203]: Collecting audit messages is disabled.
May 17 00:13:18.804205 systemd-journald[2203]: Journal started
May 17 00:13:18.804226 systemd-journald[2203]: Runtime Journal (/run/log/journal/54429197103543d7ad81f4497242c348) is 8.0M, max 4.0G, 3.9G free.
May 17 00:13:17.511713 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:13:17.532774 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 17 00:13:17.533069 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:13:17.533349 systemd[1]: systemd-journald.service: Consumed 3.487s CPU time.
May 17 00:13:18.828602 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:13:18.855603 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:13:18.876604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:13:18.899379 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:13:18.899421 systemd[1]: Stopped verity-setup.service.
May 17 00:13:18.923608 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:13:18.929505 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:13:18.934983 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:13:18.940396 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:13:18.945779 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:13:18.951112 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:13:18.956331 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:13:18.961729 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:13:18.967151 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
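The journald size report above ("is 8.0M, max 4.0G, 3.9G free") uses human-readable units. A hypothetical Python sketch (not systemd tooling) that converts such a line to byte counts:

```python
import re

SIZE_RE = re.compile(r"is ([\d.]+)([KMG]), max ([\d.]+)([KMG]), ([\d.]+)([KMG]) free")
UNIT = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}  # binary units, as journald reports

def journal_sizes(line):
    """Return (used, max, free) in bytes from a journald size-report line."""
    used, uu, cap, cu, free, fu = SIZE_RE.search(line).groups()
    return (float(used) * UNIT[uu], float(cap) * UNIT[cu], float(free) * UNIT[fu])

line = ("systemd-journald[2203]: Runtime Journal "
        "(/run/log/journal/54429197103543d7ad81f4497242c348) "
        "is 8.0M, max 4.0G, 3.9G free.")
used, cap, free = journal_sizes(line)
print(int(used), int(cap))  # -> 8388608 4294967296
```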
May 17 00:13:18.972531 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:13:18.972813 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:13:18.980114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:13:18.980261 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:13:18.985690 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:13:18.985843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:13:18.991069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:13:18.992678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:13:18.997878 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:13:18.998019 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:13:19.003085 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:13:19.003217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:13:19.008208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:13:19.013286 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:13:19.018405 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:13:19.024615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:13:19.040735 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:13:19.060755 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:13:19.066824 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:13:19.071730 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:13:19.071765 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:13:19.077357 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:13:19.091707 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:13:19.097527 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:13:19.102380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:13:19.103813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:13:19.109577 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:13:19.115700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:13:19.116824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:13:19.121147 systemd-journald[2203]: Time spent on flushing to /var/log/journal/54429197103543d7ad81f4497242c348 is 38.728ms for 2344 entries.
May 17 00:13:19.121147 systemd-journald[2203]: System Journal (/var/log/journal/54429197103543d7ad81f4497242c348) is 8.0M, max 195.6M, 187.6M free.
May 17 00:13:19.169941 systemd-journald[2203]: Received client request to flush runtime journal.
May 17 00:13:19.170031 kernel: loop0: detected capacity change from 0 to 203944
May 17 00:13:19.170114 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:13:19.133716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:13:19.134893 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:13:19.140829 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:13:19.146691 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:13:19.152541 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:13:19.179248 systemd-tmpfiles[2241]: ACLs are not supported, ignoring.
May 17 00:13:19.179261 systemd-tmpfiles[2241]: ACLs are not supported, ignoring.
May 17 00:13:19.190978 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:13:19.195719 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:13:19.201621 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:13:19.206361 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:13:19.211136 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:13:19.216097 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:13:19.221576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:13:19.222595 kernel: loop1: detected capacity change from 0 to 114328
May 17 00:13:19.242687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:13:19.261886 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:13:19.268022 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:13:19.272594 kernel: loop2: detected capacity change from 0 to 114432
May 17 00:13:19.284132 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:13:19.284897 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:13:19.290912 udevadm[2243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:13:19.303227 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:13:19.319857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:13:19.328597 kernel: loop3: detected capacity change from 0 to 8
May 17 00:13:19.338867 systemd-tmpfiles[2277]: ACLs are not supported, ignoring.
May 17 00:13:19.338879 systemd-tmpfiles[2277]: ACLs are not supported, ignoring.
May 17 00:13:19.342350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:13:19.363993 ldconfig[2232]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:13:19.366393 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:13:19.375598 kernel: loop4: detected capacity change from 0 to 203944
May 17 00:13:19.395603 kernel: loop5: detected capacity change from 0 to 114328
May 17 00:13:19.410600 kernel: loop6: detected capacity change from 0 to 114432
May 17 00:13:19.425602 kernel: loop7: detected capacity change from 0 to 8
May 17 00:13:19.425839 (sd-merge)[2284]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
May 17 00:13:19.426248 (sd-merge)[2284]: Merged extensions into '/usr'.
May 17 00:13:19.429243 systemd[1]: Reloading requested from client PID 2237 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:13:19.429258 systemd[1]: Reloading...
May 17 00:13:19.471599 zram_generator::config[2313]: No configuration found.
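The `(sd-merge)` entries above name the sysext images (including the Kubernetes image Ignition downloaded earlier) that systemd-sysext merged into `/usr`. A minimal Python sketch (an illustrative helper, not systemd tooling) that extracts the extension names from such an entry:

```python
import re

def merged_extensions(line):
    """Extract the single-quoted sysext names from an sd-merge 'Using extensions' entry."""
    return re.findall(r"'([^']+)'", line)

line = ("(sd-merge)[2284]: Using extensions 'containerd-flatcar', "
        "'docker-flatcar', 'kubernetes', 'oem-packet'.")
print(merged_extensions(line))
# -> ['containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet']
```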
May 17 00:13:19.565897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:13:19.614461 systemd[1]: Reloading finished in 184 ms.
May 17 00:13:19.640694 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:13:19.645756 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:13:19.662849 systemd[1]: Starting ensure-sysext.service...
May 17 00:13:19.668800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:13:19.675761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:13:19.682709 systemd[1]: Reloading requested from client PID 2365 ('systemctl') (unit ensure-sysext.service)...
May 17 00:13:19.682721 systemd[1]: Reloading...
May 17 00:13:19.689086 systemd-tmpfiles[2366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:13:19.689338 systemd-tmpfiles[2366]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:13:19.689956 systemd-tmpfiles[2366]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:13:19.690161 systemd-tmpfiles[2366]: ACLs are not supported, ignoring.
May 17 00:13:19.690207 systemd-tmpfiles[2366]: ACLs are not supported, ignoring.
May 17 00:13:19.692582 systemd-tmpfiles[2366]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:13:19.692596 systemd-tmpfiles[2366]: Skipping /boot
May 17 00:13:19.699678 systemd-tmpfiles[2366]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:13:19.699686 systemd-tmpfiles[2366]: Skipping /boot
May 17 00:13:19.701176 systemd-udevd[2367]: Using default interface naming scheme 'v255'.
May 17 00:13:19.724595 zram_generator::config[2396]: No configuration found.
May 17 00:13:19.757597 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2439)
May 17 00:13:19.770606 kernel: IPMI message handler: version 39.2
May 17 00:13:19.780601 kernel: ipmi device interface
May 17 00:13:19.792595 kernel: ipmi_ssif: IPMI SSIF Interface driver
May 17 00:13:19.792628 kernel: ipmi_si: IPMI System Interface driver
May 17 00:13:19.805846 kernel: ipmi_si: Unable to find any System Interface(s)
May 17 00:13:19.840220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:13:19.903860 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 17 00:13:19.904017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 17 00:13:19.908925 systemd[1]: Reloading finished in 225 ms.
May 17 00:13:19.925242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:13:19.943942 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:13:19.965002 systemd[1]: Finished ensure-sysext.service.
May 17 00:13:19.969952 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:13:20.000752 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:13:20.007067 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
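Both daemon reloads in this boot log (triggered by systemd-sysext and then by ensure-sysext) report their duration as `Reloading finished in N ms.` A small Python sketch (an illustrative helper, not systemd tooling) that collects those durations from log lines of this form:

```python
import re

RELOAD_RE = re.compile(r"Reloading finished in (\d+) ms")

def reload_ms(lines):
    """Collect daemon-reload durations, in milliseconds, from systemd log lines."""
    return [int(m.group(1)) for line in lines
            if (m := RELOAD_RE.search(line))]

log = [
    "systemd[1]: Reloading finished in 184 ms.",
    "systemd[1]: Reloading finished in 225 ms.",
]
print(reload_ms(log))  # -> [184, 225]
```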
May 17 00:13:20.012394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:13:20.013487 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:13:20.019544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:13:20.025483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:13:20.031449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:13:20.031783 lvm[2545]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:13:20.037101 augenrules[2559]: No rules
May 17 00:13:20.037348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:13:20.042291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:13:20.043221 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:13:20.049201 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:13:20.055799 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:13:20.062665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:13:20.068907 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:13:20.074679 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:13:20.080366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:13:20.085834 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:13:20.091429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:13:20.096431 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:13:20.101314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:13:20.102062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:13:20.107749 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:13:20.107887 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:13:20.112606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:13:20.112754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:13:20.117467 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:13:20.117587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:13:20.122494 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:13:20.127396 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:13:20.134274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:13:20.145776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:13:20.173859 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:13:20.178171 lvm[2592]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:13:20.178381 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:13:20.178447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:13:20.179689 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:13:20.186220 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:13:20.190958 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:13:20.191369 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:13:20.196240 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:13:20.219552 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:13:20.224915 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:13:20.274597 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:13:20.279852 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:13:20.284538 systemd-resolved[2570]: Positive Trust Anchors:
May 17 00:13:20.284552 systemd-resolved[2570]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:13:20.284584 systemd-resolved[2570]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:13:20.287892 systemd-networkd[2569]: lo: Link UP
May 17 00:13:20.287898 systemd-networkd[2569]: lo: Gained carrier
May 17 00:13:20.288022 systemd-resolved[2570]: Using system hostname 'ci-4081.3.3-n-3bfd76e738'.
May 17 00:13:20.289413 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:13:20.291606 systemd-networkd[2569]: bond0: netdev ready
May 17 00:13:20.293902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:13:20.298313 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:13:20.300684 systemd-networkd[2569]: Enumeration completed
May 17 00:13:20.302672 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:13:20.307001 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:13:20.311522 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:13:20.311799 systemd-networkd[2569]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5a:06:d8.network.
May 17 00:13:20.315918 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:13:20.320322 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:13:20.324750 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:13:20.324770 systemd[1]: Reached target paths.target - Path Units.
May 17 00:13:20.329159 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:13:20.334162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:13:20.339991 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:13:20.347692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:13:20.352637 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:13:20.357539 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:13:20.362131 systemd[1]: Reached target network.target - Network.
May 17 00:13:20.366641 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:13:20.370943 systemd[1]: Reached target basic.target - Basic System.
May 17 00:13:20.375192 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:13:20.375214 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:13:20.386692 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:13:20.392308 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 17 00:13:20.397844 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:13:20.403422 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:13:20.409057 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:13:20.413521 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:13:20.414047 coreos-metadata[2621]: May 17 00:13:20.414 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:20.414289 jq[2625]: false
May 17 00:13:20.414646 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:13:20.416811 coreos-metadata[2621]: May 17 00:13:20.416 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 17 00:13:20.419382 dbus-daemon[2622]: [system] SELinux support is enabled
May 17 00:13:20.420266 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:13:20.425880 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:13:20.429086 extend-filesystems[2627]: Found loop4
May 17 00:13:20.435130 extend-filesystems[2627]: Found loop5
May 17 00:13:20.435130 extend-filesystems[2627]: Found loop6
May 17 00:13:20.435130 extend-filesystems[2627]: Found loop7
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1p1
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1p2
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1p3
May 17 00:13:20.435130 extend-filesystems[2627]: Found usr
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1p4
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1p6
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1p7
May 17 00:13:20.435130 extend-filesystems[2627]: Found nvme0n1p9
May 17 00:13:20.435130 extend-filesystems[2627]: Checking size of /dev/nvme0n1p9
May 17 00:13:20.567390 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks
May 17 00:13:20.567424 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2413)
May 17 00:13:20.431735 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:13:20.567489 extend-filesystems[2627]: Resized partition /dev/nvme0n1p9
May 17 00:13:20.565291 dbus-daemon[2622]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 17 00:13:20.443856 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:13:20.572065 extend-filesystems[2649]: resize2fs 1.47.1 (20-May-2024)
May 17 00:13:20.449950 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:13:20.490881 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:13:20.491505 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:13:20.492221 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:13:20.581973 update_engine[2662]: I20250517 00:13:20.540064 2662 main.cc:92] Flatcar Update Engine starting
May 17 00:13:20.581973 update_engine[2662]: I20250517 00:13:20.542500 2662 update_check_scheduler.cc:74] Next update check in 7m35s
May 17 00:13:20.499073 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:13:20.582225 jq[2663]: true
May 17 00:13:20.507473 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:13:20.520423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:13:20.582519 tar[2665]: linux-arm64/helm
May 17 00:13:20.520710 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:13:20.582751 jq[2666]: true
May 17 00:13:20.520985 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:13:20.521131 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:13:20.526560 systemd-logind[2650]: Watching system buttons on /dev/input/event0 (Power Button)
May 17 00:13:20.530058 systemd-logind[2650]: New seat seat0.
May 17 00:13:20.530865 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:13:20.531023 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:13:20.545371 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:13:20.563879 (ntainerd)[2668]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:13:20.570744 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:13:20.585539 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:13:20.585702 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:13:20.590469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:13:20.590575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:13:20.598048 bash[2693]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:13:20.617808 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:13:20.625075 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:13:20.632342 systemd[1]: Starting sshkeys.service...
May 17 00:13:20.645704 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:13:20.645709 locksmithd[2697]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:13:20.651894 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:13:20.671559 coreos-metadata[2712]: May 17 00:13:20.671 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 00:13:20.672606 coreos-metadata[2712]: May 17 00:13:20.672 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 17 00:13:20.701043 containerd[2668]: time="2025-05-17T00:13:20.700932240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:13:20.722800 containerd[2668]: time="2025-05-17T00:13:20.722758040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:20.724080 containerd[2668]: time="2025-05-17T00:13:20.724049120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:20.724109 containerd[2668]: time="2025-05-17T00:13:20.724081440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:13:20.724109 containerd[2668]: time="2025-05-17T00:13:20.724097160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:13:20.724277 containerd[2668]: time="2025-05-17T00:13:20.724259520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:13:20.724299 containerd[2668]: time="2025-05-17T00:13:20.724278360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:13:20.724354 containerd[2668]: time="2025-05-17T00:13:20.724338680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:20.724375 containerd[2668]: time="2025-05-17T00:13:20.724353200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:20.724521 containerd[2668]: time="2025-05-17T00:13:20.724502520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:20.724541 containerd[2668]: time="2025-05-17T00:13:20.724518920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:13:20.724541 containerd[2668]: time="2025-05-17T00:13:20.724531480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:20.724577 containerd[2668]: time="2025-05-17T00:13:20.724540600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:13:20.724635 containerd[2668]: time="2025-05-17T00:13:20.724620960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:20.724823 containerd[2668]: time="2025-05-17T00:13:20.724806200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:13:20.724947 containerd[2668]: time="2025-05-17T00:13:20.724929800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:13:20.724968 containerd[2668]: time="2025-05-17T00:13:20.724946160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:13:20.725041 containerd[2668]: time="2025-05-17T00:13:20.725027680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:13:20.725081 containerd[2668]: time="2025-05-17T00:13:20.725069520Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:13:20.732523 containerd[2668]: time="2025-05-17T00:13:20.732498640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:13:20.732573 containerd[2668]: time="2025-05-17T00:13:20.732537600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:13:20.732573 containerd[2668]: time="2025-05-17T00:13:20.732553200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:13:20.732646 containerd[2668]: time="2025-05-17T00:13:20.732575720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:13:20.732646 containerd[2668]: time="2025-05-17T00:13:20.732596160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:13:20.732746 containerd[2668]: time="2025-05-17T00:13:20.732725920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:13:20.732981 containerd[2668]: time="2025-05-17T00:13:20.732952640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:13:20.733137 containerd[2668]: time="2025-05-17T00:13:20.733121160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:13:20.733161 containerd[2668]: time="2025-05-17T00:13:20.733139800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 17 00:13:20.733161 containerd[2668]: time="2025-05-17T00:13:20.733152880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:13:20.733194 containerd[2668]: time="2025-05-17T00:13:20.733167240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733194 containerd[2668]: time="2025-05-17T00:13:20.733181360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733226 containerd[2668]: time="2025-05-17T00:13:20.733193960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733226 containerd[2668]: time="2025-05-17T00:13:20.733207600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733226 containerd[2668]: time="2025-05-17T00:13:20.733221440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733277 containerd[2668]: time="2025-05-17T00:13:20.733234720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733277 containerd[2668]: time="2025-05-17T00:13:20.733247880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733277 containerd[2668]: time="2025-05-17T00:13:20.733259080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:13:20.733324 containerd[2668]: time="2025-05-17T00:13:20.733278040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733324 containerd[2668]: time="2025-05-17T00:13:20.733292080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733324 containerd[2668]: time="2025-05-17T00:13:20.733304800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733380 containerd[2668]: time="2025-05-17T00:13:20.733322920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733380 containerd[2668]: time="2025-05-17T00:13:20.733335320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733380 containerd[2668]: time="2025-05-17T00:13:20.733347840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733380 containerd[2668]: time="2025-05-17T00:13:20.733359400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733380 containerd[2668]: time="2025-05-17T00:13:20.733371720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733463 containerd[2668]: time="2025-05-17T00:13:20.733384640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733463 containerd[2668]: time="2025-05-17T00:13:20.733399320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733463 containerd[2668]: time="2025-05-17T00:13:20.733410680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733463 containerd[2668]: time="2025-05-17T00:13:20.733422280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733463 containerd[2668]: time="2025-05-17T00:13:20.733434240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733563 containerd[2668]: time="2025-05-17T00:13:20.733450040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 17 00:13:20.733563 containerd[2668]: time="2025-05-17T00:13:20.733469640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733563 containerd[2668]: time="2025-05-17T00:13:20.733482320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:13:20.733563 containerd[2668]: time="2025-05-17T00:13:20.733494200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:13:20.734388 containerd[2668]: time="2025-05-17T00:13:20.734328520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:13:20.734410 containerd[2668]: time="2025-05-17T00:13:20.734396400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:13:20.734436 containerd[2668]: time="2025-05-17T00:13:20.734413920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:13:20.734455 containerd[2668]: time="2025-05-17T00:13:20.734443560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 17 00:13:20.734641 containerd[2668]: time="2025-05-17T00:13:20.734622160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:13:20.734661 containerd[2668]: time="2025-05-17T00:13:20.734647920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 17 00:13:20.734680 containerd[2668]: time="2025-05-17T00:13:20.734660480Z" level=info msg="NRI interface is disabled by configuration."
May 17 00:13:20.734680 containerd[2668]: time="2025-05-17T00:13:20.734672080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:13:20.735037 containerd[2668]: time="2025-05-17T00:13:20.734988280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:13:20.735139 containerd[2668]: time="2025-05-17T00:13:20.735044600Z" level=info msg="Connect containerd service"
May 17 00:13:20.735139 containerd[2668]: time="2025-05-17T00:13:20.735073080Z" level=info msg="using legacy CRI server"
May 17 00:13:20.735139 containerd[2668]: time="2025-05-17T00:13:20.735079760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 17 00:13:20.735193 containerd[2668]: time="2025-05-17T00:13:20.735149760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:13:20.735785 containerd[2668]: time="2025-05-17T00:13:20.735759000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:13:20.736022 containerd[2668]: time="2025-05-17T00:13:20.735979320Z" level=info msg="Start subscribing containerd event"
May 17 00:13:20.736048 containerd[2668]: time="2025-05-17T00:13:20.736039640Z" level=info msg="Start recovering state"
May 17 00:13:20.736125 containerd[2668]: time="2025-05-17T00:13:20.736112320Z" level=info msg="Start event monitor"
May 17 00:13:20.736145 containerd[2668]: time="2025-05-17T00:13:20.736128280Z" level=info msg="Start snapshots syncer"
May 17 00:13:20.736145 containerd[2668]: time="2025-05-17T00:13:20.736138200Z" level=info msg="Start cni network conf syncer for default"
May 17 00:13:20.736180 containerd[2668]: time="2025-05-17T00:13:20.736146440Z" level=info msg="Start streaming server"
May 17 00:13:20.736228 containerd[2668]: time="2025-05-17T00:13:20.736214000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:13:20.736264 containerd[2668]: time="2025-05-17T00:13:20.736255120Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:13:20.736316 containerd[2668]: time="2025-05-17T00:13:20.736306040Z" level=info msg="containerd successfully booted in 0.036184s"
May 17 00:13:20.736356 systemd[1]: Started containerd.service - containerd container runtime.
May 17 00:13:20.857775 tar[2665]: linux-arm64/LICENSE
May 17 00:13:20.857869 tar[2665]: linux-arm64/README.md
May 17 00:13:20.871174 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 17 00:13:20.943604 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 17 00:13:20.958875 extend-filesystems[2649]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 00:13:20.958875 extend-filesystems[2649]: old_desc_blocks = 1, new_desc_blocks = 112 May 17 00:13:20.958875 extend-filesystems[2649]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 17 00:13:20.989718 extend-filesystems[2627]: Resized filesystem in /dev/nvme0n1p9 May 17 00:13:20.989718 extend-filesystems[2627]: Found nvme1n1 May 17 00:13:20.961399 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:13:20.961711 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:13:21.216578 sshd_keygen[2657]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:13:21.249775 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:13:21.272834 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:13:21.281966 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:13:21.282147 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:13:21.286594 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 17 00:13:21.309603 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 17 00:13:21.312408 systemd-networkd[2569]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5a:06:d9.network. May 17 00:13:21.321881 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:13:21.330185 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:13:21.336719 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:13:21.343016 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 17 00:13:21.348315 systemd[1]: Reached target getty.target - Login Prompts. 
May 17 00:13:21.416952 coreos-metadata[2621]: May 17 00:13:21.416 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 17 00:13:21.417440 coreos-metadata[2621]: May 17 00:13:21.417 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 17 00:13:21.672695 coreos-metadata[2712]: May 17 00:13:21.672 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 17 00:13:21.673108 coreos-metadata[2712]: May 17 00:13:21.673 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 17 00:13:21.868604 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 17 00:13:21.885089 systemd-networkd[2569]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 00:13:21.885593 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 17 00:13:21.886405 systemd-networkd[2569]: enP1p1s0f0np0: Link UP May 17 00:13:21.886717 systemd-networkd[2569]: enP1p1s0f0np0: Gained carrier May 17 00:13:21.904596 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 00:13:21.916052 systemd-networkd[2569]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5a:06:d8.network. May 17 00:13:21.916341 systemd-networkd[2569]: enP1p1s0f1np1: Link UP May 17 00:13:21.916592 systemd-networkd[2569]: enP1p1s0f1np1: Gained carrier May 17 00:13:21.928880 systemd-networkd[2569]: bond0: Link UP May 17 00:13:21.929172 systemd-networkd[2569]: bond0: Gained carrier May 17 00:13:21.929340 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. May 17 00:13:21.929878 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. May 17 00:13:21.930216 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. May 17 00:13:21.930361 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. 
May 17 00:13:22.010925 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 17 00:13:22.010957 kernel: bond0: active interface up! May 17 00:13:22.135603 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 17 00:13:23.417529 coreos-metadata[2621]: May 17 00:13:23.417 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 17 00:13:23.527659 systemd-networkd[2569]: bond0: Gained IPv6LL May 17 00:13:23.527971 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. May 17 00:13:23.591973 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. May 17 00:13:23.592082 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. May 17 00:13:23.593904 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:13:23.599689 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:13:23.617861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:23.624544 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:13:23.646230 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:13:23.673230 coreos-metadata[2712]: May 17 00:13:23.673 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 17 00:13:24.249222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:13:24.255168 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:24.683068 kubelet[2778]: E0517 00:13:24.683034 2778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:24.685291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:24.685435 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:25.007442 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 17 00:13:25.007755 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 17 00:13:25.689894 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:13:25.705837 systemd[1]: Started sshd@0-147.28.151.230:22-218.92.0.158:42104.service - OpenSSH per-connection server daemon (218.92.0.158:42104). May 17 00:13:25.797978 systemd[1]: Started sshd@1-147.28.151.230:22-147.75.109.163:41230.service - OpenSSH per-connection server daemon (147.75.109.163:41230). May 17 00:13:25.944180 coreos-metadata[2621]: May 17 00:13:25.944 INFO Fetch successful May 17 00:13:26.008450 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:13:26.015201 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 17 00:13:26.018188 coreos-metadata[2712]: May 17 00:13:26.018 INFO Fetch successful May 17 00:13:26.068036 unknown[2712]: wrote ssh authorized keys file for user: core May 17 00:13:26.093664 update-ssh-keys[2817]: Updated "/home/core/.ssh/authorized_keys" May 17 00:13:26.095680 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
May 17 00:13:26.102048 systemd[1]: Finished sshkeys.service. May 17 00:13:26.235409 sshd[2808]: Accepted publickey for core from 147.75.109.163 port 41230 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:13:26.237249 sshd[2808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:26.245018 systemd-logind[2650]: New session 1 of user core. May 17 00:13:26.246196 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:13:26.264852 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:13:26.274464 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:13:26.283673 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:13:26.291924 (systemd)[2824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:13:26.362881 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 17 00:13:26.363348 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:13:26.375513 login[2756]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:26.376244 login[2757]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:26.378785 systemd-logind[2650]: New session 2 of user core. May 17 00:13:26.381044 systemd-logind[2650]: New session 3 of user core. May 17 00:13:26.387195 systemd[2824]: Queued start job for default target default.target. May 17 00:13:26.400693 systemd[2824]: Created slice app.slice - User Application Slice. May 17 00:13:26.400718 systemd[2824]: Reached target paths.target - Paths. May 17 00:13:26.400730 systemd[2824]: Reached target timers.target - Timers. May 17 00:13:26.401982 systemd[2824]: Starting dbus.socket - D-Bus User Message Bus Socket... 
May 17 00:13:26.410955 systemd[2824]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:13:26.411008 systemd[2824]: Reached target sockets.target - Sockets. May 17 00:13:26.411020 systemd[2824]: Reached target basic.target - Basic System. May 17 00:13:26.411060 systemd[2824]: Reached target default.target - Main User Target. May 17 00:13:26.411083 systemd[2824]: Startup finished in 114ms. May 17 00:13:26.411392 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:13:26.412835 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:13:26.413664 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:13:26.414477 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:13:26.415053 systemd[1]: Startup finished in 3.223s (kernel) + 21.516s (initrd) + 9.538s (userspace) = 34.278s. May 17 00:13:26.725962 systemd[1]: Started sshd@2-147.28.151.230:22-147.75.109.163:41234.service - OpenSSH per-connection server daemon (147.75.109.163:41234). May 17 00:13:27.145790 sshd[2866]: Accepted publickey for core from 147.75.109.163 port 41234 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:13:27.147045 sshd[2866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:27.149867 systemd-logind[2650]: New session 4 of user core. May 17 00:13:27.164703 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:13:27.444237 sshd[2866]: pam_unix(sshd:session): session closed for user core May 17 00:13:27.447786 systemd[1]: sshd@2-147.28.151.230:22-147.75.109.163:41234.service: Deactivated successfully. May 17 00:13:27.450111 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:13:27.450666 systemd-logind[2650]: Session 4 logged out. Waiting for processes to exit. May 17 00:13:27.451247 systemd-logind[2650]: Removed session 4. 
May 17 00:13:27.520874 systemd[1]: Started sshd@3-147.28.151.230:22-147.75.109.163:41236.service - OpenSSH per-connection server daemon (147.75.109.163:41236). May 17 00:13:27.964526 sshd[2874]: Accepted publickey for core from 147.75.109.163 port 41236 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:13:27.965767 sshd[2874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:27.968693 systemd-logind[2650]: New session 5 of user core. May 17 00:13:27.977695 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:13:28.131135 sshd[2876]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:13:28.268881 sshd[2874]: pam_unix(sshd:session): session closed for user core May 17 00:13:28.272599 systemd[1]: sshd@3-147.28.151.230:22-147.75.109.163:41236.service: Deactivated successfully. May 17 00:13:28.274345 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:13:28.274987 systemd-logind[2650]: Session 5 logged out. Waiting for processes to exit. May 17 00:13:28.275600 systemd-logind[2650]: Removed session 5. May 17 00:13:28.339899 systemd[1]: Started sshd@4-147.28.151.230:22-147.75.109.163:33400.service - OpenSSH per-connection server daemon (147.75.109.163:33400). May 17 00:13:28.768143 sshd[2882]: Accepted publickey for core from 147.75.109.163 port 33400 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:13:28.769259 sshd[2882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:28.772165 systemd-logind[2650]: New session 6 of user core. May 17 00:13:28.784755 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:13:29.064622 sshd[2882]: pam_unix(sshd:session): session closed for user core May 17 00:13:29.067203 systemd[1]: sshd@4-147.28.151.230:22-147.75.109.163:33400.service: Deactivated successfully. 
May 17 00:13:29.069009 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:13:29.069460 systemd-logind[2650]: Session 6 logged out. Waiting for processes to exit. May 17 00:13:29.070029 systemd-logind[2650]: Removed session 6. May 17 00:13:29.143703 systemd[1]: Started sshd@5-147.28.151.230:22-147.75.109.163:33416.service - OpenSSH per-connection server daemon (147.75.109.163:33416). May 17 00:13:29.567703 sshd[2889]: Accepted publickey for core from 147.75.109.163 port 33416 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:13:29.568803 sshd[2889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:29.571658 systemd-logind[2650]: New session 7 of user core. May 17 00:13:29.583687 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:13:29.661996 sshd[2805]: PAM: Permission denied for root from 218.92.0.158 May 17 00:13:29.820531 sudo[2892]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:13:29.820828 sudo[2892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:13:29.833387 sudo[2892]: pam_unix(sudo:session): session closed for user root May 17 00:13:29.899011 sshd[2889]: pam_unix(sshd:session): session closed for user core May 17 00:13:29.901934 systemd[1]: sshd@5-147.28.151.230:22-147.75.109.163:33416.service: Deactivated successfully. May 17 00:13:29.903389 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:13:29.903894 systemd-logind[2650]: Session 7 logged out. Waiting for processes to exit. May 17 00:13:29.904467 systemd-logind[2650]: Removed session 7. May 17 00:13:29.966806 systemd[1]: Started sshd@6-147.28.151.230:22-147.75.109.163:33430.service - OpenSSH per-connection server daemon (147.75.109.163:33430). 
May 17 00:13:30.386214 sshd[2898]: Accepted publickey for core from 147.75.109.163 port 33430 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:13:30.387334 sshd[2898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:30.390075 systemd-logind[2650]: New session 8 of user core. May 17 00:13:30.400739 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:13:30.616998 sudo[2902]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:13:30.617269 sudo[2902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:13:30.619736 sudo[2902]: pam_unix(sudo:session): session closed for user root May 17 00:13:30.624062 sudo[2901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:13:30.624332 sudo[2901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:13:30.642893 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:13:30.643854 auditctl[2905]: No rules May 17 00:13:30.644654 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:13:30.645700 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:13:30.647436 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:13:30.670294 augenrules[2923]: No rules May 17 00:13:30.671546 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:13:30.672416 sudo[2901]: pam_unix(sudo:session): session closed for user root May 17 00:13:30.734727 sshd[2898]: pam_unix(sshd:session): session closed for user core May 17 00:13:30.737458 systemd[1]: sshd@6-147.28.151.230:22-147.75.109.163:33430.service: Deactivated successfully. May 17 00:13:30.739721 systemd[1]: session-8.scope: Deactivated successfully. 
May 17 00:13:30.740211 systemd-logind[2650]: Session 8 logged out. Waiting for processes to exit. May 17 00:13:30.740773 systemd-logind[2650]: Removed session 8. May 17 00:13:30.805963 systemd[1]: Started sshd@7-147.28.151.230:22-147.75.109.163:33436.service - OpenSSH per-connection server daemon (147.75.109.163:33436). May 17 00:13:31.177071 sshd[2894]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:13:31.227070 sshd[2932]: Accepted publickey for core from 147.75.109.163 port 33436 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo May 17 00:13:31.228180 sshd[2932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:31.231091 systemd-logind[2650]: New session 9 of user core. May 17 00:13:31.244706 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:13:31.458145 sudo[2935]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:13:31.458420 sudo[2935]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:13:31.733798 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:13:31.733920 (dockerd)[2967]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:13:31.947696 dockerd[2967]: time="2025-05-17T00:13:31.947646520Z" level=info msg="Starting up" May 17 00:13:32.006500 dockerd[2967]: time="2025-05-17T00:13:32.006422560Z" level=info msg="Loading containers: start." May 17 00:13:32.088598 kernel: Initializing XFRM netlink socket May 17 00:13:32.106415 systemd-timesyncd[2571]: Network configuration changed, trying to establish connection. May 17 00:13:32.124304 systemd-timesyncd[2571]: Contacted time server [2606:4700:f1::123]:123 (2.flatcar.pool.ntp.org). 
May 17 00:13:32.124358 systemd-timesyncd[2571]: Initial clock synchronization to Sat 2025-05-17 00:13:32.118619 UTC. May 17 00:13:32.163320 systemd-networkd[2569]: docker0: Link UP May 17 00:13:32.179759 dockerd[2967]: time="2025-05-17T00:13:32.179731000Z" level=info msg="Loading containers: done." May 17 00:13:32.188509 dockerd[2967]: time="2025-05-17T00:13:32.188478520Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:13:32.188579 dockerd[2967]: time="2025-05-17T00:13:32.188558800Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:13:32.188685 dockerd[2967]: time="2025-05-17T00:13:32.188669040Z" level=info msg="Daemon has completed initialization" May 17 00:13:32.210044 dockerd[2967]: time="2025-05-17T00:13:32.209928360Z" level=info msg="API listen on /run/docker.sock" May 17 00:13:32.210083 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:13:32.997509 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3778275438-merged.mount: Deactivated successfully. May 17 00:13:33.100755 containerd[2668]: time="2025-05-17T00:13:33.100718293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:13:33.319552 sshd[2805]: PAM: Permission denied for root from 218.92.0.158 May 17 00:13:33.771853 sshd[3162]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:13:34.143184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497898332.mount: Deactivated successfully. May 17 00:13:34.820930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:13:34.831813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:13:34.937877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:34.941485 (kubelet)[3228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:34.999244 kubelet[3228]: E0517 00:13:34.999211 3228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:35.002157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:35.002293 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:35.513809 containerd[2668]: time="2025-05-17T00:13:35.513772179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:35.514093 containerd[2668]: time="2025-05-17T00:13:35.513811927Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651974" May 17 00:13:35.514958 containerd[2668]: time="2025-05-17T00:13:35.514931669Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:35.517709 containerd[2668]: time="2025-05-17T00:13:35.517681918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:35.518803 containerd[2668]: time="2025-05-17T00:13:35.518783625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id 
\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 2.418025625s" May 17 00:13:35.518830 containerd[2668]: time="2025-05-17T00:13:35.518811696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 00:13:35.520001 containerd[2668]: time="2025-05-17T00:13:35.519980103Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:13:35.794128 sshd[2805]: PAM: Permission denied for root from 218.92.0.158 May 17 00:13:36.936387 sshd[2805]: Received disconnect from 218.92.0.158 port 42104:11: [preauth] May 17 00:13:36.936387 sshd[2805]: Disconnected from authenticating user root 218.92.0.158 port 42104 [preauth] May 17 00:13:36.938757 systemd[1]: sshd@0-147.28.151.230:22-218.92.0.158:42104.service: Deactivated successfully. 
May 17 00:13:37.067489 containerd[2668]: time="2025-05-17T00:13:37.067452736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:37.067703 containerd[2668]: time="2025-05-17T00:13:37.067512800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459528" May 17 00:13:37.068553 containerd[2668]: time="2025-05-17T00:13:37.068533769Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:37.071334 containerd[2668]: time="2025-05-17T00:13:37.071313151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:37.072410 containerd[2668]: time="2025-05-17T00:13:37.072379347Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.552368013s" May 17 00:13:37.072459 containerd[2668]: time="2025-05-17T00:13:37.072416138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 00:13:37.072806 containerd[2668]: time="2025-05-17T00:13:37.072786879Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:13:38.351087 containerd[2668]: time="2025-05-17T00:13:38.351044398Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:38.351401 containerd[2668]: time="2025-05-17T00:13:38.351104983Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125279" May 17 00:13:38.352149 containerd[2668]: time="2025-05-17T00:13:38.352124450Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:38.354941 containerd[2668]: time="2025-05-17T00:13:38.354921113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:38.356034 containerd[2668]: time="2025-05-17T00:13:38.356007643Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.283191491s" May 17 00:13:38.356067 containerd[2668]: time="2025-05-17T00:13:38.356041914Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 00:13:38.356429 containerd[2668]: time="2025-05-17T00:13:38.356410542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:13:38.903855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4135759.mount: Deactivated successfully. 
May 17 00:13:39.594515 containerd[2668]: time="2025-05-17T00:13:39.594474578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:39.594758 containerd[2668]: time="2025-05-17T00:13:39.594515768Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871375" May 17 00:13:39.595271 containerd[2668]: time="2025-05-17T00:13:39.595249677Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:39.598483 containerd[2668]: time="2025-05-17T00:13:39.598452610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:39.599315 containerd[2668]: time="2025-05-17T00:13:39.599287015Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.242845679s" May 17 00:13:39.599345 containerd[2668]: time="2025-05-17T00:13:39.599321007Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:13:39.599684 containerd[2668]: time="2025-05-17T00:13:39.599660088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:13:40.707132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310138662.mount: Deactivated successfully. 
May 17 00:13:41.891022 containerd[2668]: time="2025-05-17T00:13:41.890971968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:41.891319 containerd[2668]: time="2025-05-17T00:13:41.891030635Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 17 00:13:41.892203 containerd[2668]: time="2025-05-17T00:13:41.892173841Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:41.895148 containerd[2668]: time="2025-05-17T00:13:41.895123276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:41.896345 containerd[2668]: time="2025-05-17T00:13:41.896318071Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.296620431s" May 17 00:13:41.896399 containerd[2668]: time="2025-05-17T00:13:41.896353104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:13:41.896716 containerd[2668]: time="2025-05-17T00:13:41.896696633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:13:42.168437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121471012.mount: Deactivated successfully. 
May 17 00:13:42.169119 containerd[2668]: time="2025-05-17T00:13:42.169086878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:42.169218 containerd[2668]: time="2025-05-17T00:13:42.169167143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 17 00:13:42.169877 containerd[2668]: time="2025-05-17T00:13:42.169854970Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:42.171860 containerd[2668]: time="2025-05-17T00:13:42.171838469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:42.172726 containerd[2668]: time="2025-05-17T00:13:42.172699823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 275.971036ms" May 17 00:13:42.172754 containerd[2668]: time="2025-05-17T00:13:42.172731777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:13:42.173022 containerd[2668]: time="2025-05-17T00:13:42.173001965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:13:42.874320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103359538.mount: Deactivated successfully. May 17 00:13:45.071794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 17 00:13:45.085805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:45.188749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:45.192338 (kubelet)[3400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:13:45.221565 kubelet[3400]: E0517 00:13:45.221526 3400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:13:45.223870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:13:45.224007 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:13:46.131096 containerd[2668]: time="2025-05-17T00:13:46.131053232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:46.131406 containerd[2668]: time="2025-05-17T00:13:46.131116063Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 17 00:13:46.132302 containerd[2668]: time="2025-05-17T00:13:46.132275651Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:46.135626 containerd[2668]: time="2025-05-17T00:13:46.135597477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:13:46.136983 containerd[2668]: time="2025-05-17T00:13:46.136953916Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.963921996s" May 17 00:13:46.137008 containerd[2668]: time="2025-05-17T00:13:46.136989950Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 00:13:51.764943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:51.778863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:51.796374 systemd[1]: Reloading requested from client PID 3498 ('systemctl') (unit session-9.scope)... May 17 00:13:51.796390 systemd[1]: Reloading... May 17 00:13:51.848601 zram_generator::config[3541]: No configuration found. May 17 00:13:51.941169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:13:52.013958 systemd[1]: Reloading finished in 217 ms. May 17 00:13:52.058258 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:52.060693 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:13:52.061673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:52.063268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:52.170998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
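Each successful pull above ends with a "Pulled image" entry that packs the image name, image id, repo tag, digest, byte size, and wall-clock duration into one fixed phrase. A small parser sketch for that phrasing (field order assumed from the entries above; the `\"` sequences are the quote escaping exactly as journald prints it):

```python
import re

# Field order as seen in the containerd entries in this log.
PULLED = re.compile(
    r'Pulled image "(?P<image>[^"]+)" with image id "(?P<id>[^"]+)", '
    r'repo tag "(?P<tag>[^"]+)", repo digest "(?P<digest>[^"]+)", '
    r'size "(?P<size>\d+)" in (?P<duration>[\d.]+m?s)'
)

def parse_pulled(line: str):
    """Extract the fields of a containerd 'Pulled image' entry, or None."""
    m = PULLED.search(line.replace('\\"', '"'))  # undo journald's quote escaping
    if not m:
        return None
    out = m.groupdict()
    out["size"] = int(out["size"])
    return out

# The pause:3.10 entry from the log, verbatim:
sample = ('msg="Pulled image \\"registry.k8s.io/pause:3.10\\" with image id '
          '\\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\\", '
          'repo tag \\"registry.k8s.io/pause:3.10\\", repo digest '
          '\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\", '
          'size \\"267933\\" in 275.971036ms"')
print(parse_pulled(sample)["size"])  # -> 267933
```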
May 17 00:13:52.174647 (kubelet)[3607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:13:52.204519 kubelet[3607]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:13:52.204519 kubelet[3607]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:13:52.204519 kubelet[3607]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:13:52.204766 kubelet[3607]: I0517 00:13:52.204570 3607 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:13:52.863130 kubelet[3607]: I0517 00:13:52.863095 3607 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:13:52.863130 kubelet[3607]: I0517 00:13:52.863120 3607 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:13:52.863335 kubelet[3607]: I0517 00:13:52.863317 3607 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:13:52.882721 kubelet[3607]: E0517 00:13:52.882697 3607 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.151.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:52.883773 kubelet[3607]: 
I0517 00:13:52.883747 3607 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:13:52.889454 kubelet[3607]: E0517 00:13:52.889432 3607 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:13:52.889482 kubelet[3607]: I0517 00:13:52.889454 3607 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:13:52.910170 kubelet[3607]: I0517 00:13:52.910143 3607 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:13:52.910962 kubelet[3607]: I0517 00:13:52.910946 3607 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:13:52.911103 kubelet[3607]: I0517 00:13:52.911082 3607 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:13:52.911269 kubelet[3607]: I0517 00:13:52.911106 3607 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.3-n-3bfd76e738","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:13:52.911353 kubelet[3607]: I0517 00:13:52.911345 3607 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:13:52.911377 kubelet[3607]: I0517 00:13:52.911355 3607 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:13:52.911604 kubelet[3607]: I0517 00:13:52.911585 3607 state_mem.go:36] "Initialized new in-memory state store" May 17 00:13:52.913805 kubelet[3607]: I0517 00:13:52.913785 3607 
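The nodeConfig dump that container_manager_linux.go logs above is a single JSON object, so it can be inspected programmatically instead of read by eye. A sketch using a trimmed excerpt (only a few of the fields from the log entry above; the full dump parses the same way):

```python
import json

# Excerpt of the nodeConfig JSON from the log; trimmed to the fields used below.
node_config = json.loads("""
{
  "CgroupDriver": "systemd",
  "CgroupRoot": "/",
  "CgroupsPerQOS": true,
  "KubeletRootDir": "/var/lib/kubelet",
  "HardEvictionThresholds": [
    {"Signal": "memory.available", "Operator": "LessThan",
     "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.1}},
    {"Signal": "imagefs.available", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.15}}
  ]
}
""")

# Each threshold carries either an absolute Quantity or a Percentage.
for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
    print(f"{t['Signal']}: evict when {t['Operator']} {limit} remains")
```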
kubelet.go:408] "Attempting to sync node with API server" May 17 00:13:52.913834 kubelet[3607]: I0517 00:13:52.913816 3607 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:13:52.913855 kubelet[3607]: I0517 00:13:52.913839 3607 kubelet.go:314] "Adding apiserver pod source" May 17 00:13:52.913855 kubelet[3607]: I0517 00:13:52.913853 3607 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:13:52.918493 kubelet[3607]: W0517 00:13:52.918452 3607 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-3bfd76e738&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 17 00:13:52.918521 kubelet[3607]: W0517 00:13:52.918459 3607 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 17 00:13:52.918521 kubelet[3607]: E0517 00:13:52.918505 3607 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-3bfd76e738&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:52.918564 kubelet[3607]: E0517 00:13:52.918517 3607 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:52.918564 kubelet[3607]: I0517 00:13:52.918495 
3607 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:13:52.919294 kubelet[3607]: I0517 00:13:52.919281 3607 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:13:52.919460 kubelet[3607]: W0517 00:13:52.919453 3607 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:13:52.921380 kubelet[3607]: I0517 00:13:52.921355 3607 server.go:1274] "Started kubelet" May 17 00:13:52.921979 kubelet[3607]: I0517 00:13:52.921786 3607 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:13:52.921979 kubelet[3607]: I0517 00:13:52.921806 3607 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:13:52.922148 kubelet[3607]: I0517 00:13:52.922133 3607 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:13:52.922918 kubelet[3607]: I0517 00:13:52.922880 3607 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:13:52.922977 kubelet[3607]: I0517 00:13:52.922937 3607 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:13:52.924025 kubelet[3607]: E0517 00:13:52.923979 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:52.924025 kubelet[3607]: I0517 00:13:52.923998 3607 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:13:52.924127 kubelet[3607]: I0517 00:13:52.922960 3607 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:13:52.924447 kubelet[3607]: I0517 00:13:52.924151 3607 reconciler.go:26] "Reconciler: start to sync state" May 17 00:13:52.924583 
kubelet[3607]: W0517 00:13:52.924545 3607 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 17 00:13:52.924638 kubelet[3607]: E0517 00:13:52.924603 3607 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.151.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:52.924638 kubelet[3607]: E0517 00:13:52.924565 3607 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:13:52.925183 kubelet[3607]: E0517 00:13:52.925136 3607 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-3bfd76e738?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="200ms" May 17 00:13:52.925413 kubelet[3607]: I0517 00:13:52.925397 3607 factory.go:221] Registration of the containerd container factory successfully May 17 00:13:52.925413 kubelet[3607]: I0517 00:13:52.925414 3607 factory.go:221] Registration of the systemd container factory successfully May 17 00:13:52.925487 kubelet[3607]: I0517 00:13:52.925470 3607 server.go:449] "Adding debug handlers to kubelet server" May 17 00:13:52.925517 kubelet[3607]: I0517 00:13:52.925492 3607 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:13:52.925945 kubelet[3607]: E0517 00:13:52.924862 3607 event.go:368] "Unable to write event 
(may retry after sleeping)" err="Post \"https://147.28.151.230:6443/api/v1/namespaces/default/events\": dial tcp 147.28.151.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-3bfd76e738.1840282de0494f4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-3bfd76e738,UID:ci-4081.3.3-n-3bfd76e738,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-3bfd76e738,},FirstTimestamp:2025-05-17 00:13:52.921292622 +0000 UTC m=+0.743866503,LastTimestamp:2025-05-17 00:13:52.921292622 +0000 UTC m=+0.743866503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-3bfd76e738,}" May 17 00:13:52.937247 kubelet[3607]: I0517 00:13:52.937210 3607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:13:52.938215 kubelet[3607]: I0517 00:13:52.938198 3607 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:13:52.938247 kubelet[3607]: I0517 00:13:52.938219 3607 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:13:52.938247 kubelet[3607]: I0517 00:13:52.938236 3607 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:13:52.938290 kubelet[3607]: E0517 00:13:52.938278 3607 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:13:52.939215 kubelet[3607]: W0517 00:13:52.939170 3607 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 17 00:13:52.939288 kubelet[3607]: E0517 00:13:52.939226 3607 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:52.939288 kubelet[3607]: I0517 00:13:52.939254 3607 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:13:52.939288 kubelet[3607]: I0517 00:13:52.939270 3607 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:13:52.939288 kubelet[3607]: I0517 00:13:52.939287 3607 state_mem.go:36] "Initialized new in-memory state store" May 17 00:13:52.939960 kubelet[3607]: I0517 00:13:52.939946 3607 policy_none.go:49] "None policy: Start" May 17 00:13:52.940381 kubelet[3607]: I0517 00:13:52.940365 3607 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:13:52.940426 kubelet[3607]: I0517 00:13:52.940392 3607 state_mem.go:35] "Initializing new in-memory state store" May 17 00:13:52.943878 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. May 17 00:13:52.961674 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:13:52.964156 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:13:52.978309 kubelet[3607]: I0517 00:13:52.978287 3607 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:13:52.978497 kubelet[3607]: I0517 00:13:52.978484 3607 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:13:52.978526 kubelet[3607]: I0517 00:13:52.978498 3607 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:13:52.978701 kubelet[3607]: I0517 00:13:52.978685 3607 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:13:52.979492 kubelet[3607]: E0517 00:13:52.979474 3607 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:53.045835 systemd[1]: Created slice kubepods-burstable-pod49874cfba95cca5951871ec6fc070229.slice - libcontainer container kubepods-burstable-pod49874cfba95cca5951871ec6fc070229.slice. May 17 00:13:53.068281 systemd[1]: Created slice kubepods-burstable-podd3e222b6c2d6df9b16b9dc3f89574cd4.slice - libcontainer container kubepods-burstable-podd3e222b6c2d6df9b16b9dc3f89574cd4.slice. 
May 17 00:13:53.080348 kubelet[3607]: I0517 00:13:53.080324 3607 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.080783 kubelet[3607]: E0517 00:13:53.080755 3607 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.082455 systemd[1]: Created slice kubepods-burstable-pod2db7e8c69549782be9578c9ef83ff5b4.slice - libcontainer container kubepods-burstable-pod2db7e8c69549782be9578c9ef83ff5b4.slice. May 17 00:13:53.125466 kubelet[3607]: I0517 00:13:53.125357 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49874cfba95cca5951871ec6fc070229-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" (UID: \"49874cfba95cca5951871ec6fc070229\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125466 kubelet[3607]: I0517 00:13:53.125401 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49874cfba95cca5951871ec6fc070229-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" (UID: \"49874cfba95cca5951871ec6fc070229\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125466 kubelet[3607]: I0517 00:13:53.125428 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125466 kubelet[3607]: I0517 00:13:53.125450 3607 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125703 kubelet[3607]: I0517 00:13:53.125473 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125703 kubelet[3607]: I0517 00:13:53.125523 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49874cfba95cca5951871ec6fc070229-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" (UID: \"49874cfba95cca5951871ec6fc070229\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125703 kubelet[3607]: I0517 00:13:53.125545 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125703 kubelet[3607]: I0517 00:13:53.125566 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: 
\"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.125703 kubelet[3607]: I0517 00:13:53.125595 3607 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2db7e8c69549782be9578c9ef83ff5b4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-3bfd76e738\" (UID: \"2db7e8c69549782be9578c9ef83ff5b4\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.126062 kubelet[3607]: E0517 00:13:53.126017 3607 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-3bfd76e738?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="400ms" May 17 00:13:53.283846 kubelet[3607]: I0517 00:13:53.283819 3607 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.284213 kubelet[3607]: E0517 00:13:53.284119 3607 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.367346 containerd[2668]: time="2025-05-17T00:13:53.367277397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-3bfd76e738,Uid:49874cfba95cca5951871ec6fc070229,Namespace:kube-system,Attempt:0,}" May 17 00:13:53.380721 containerd[2668]: time="2025-05-17T00:13:53.380662491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-3bfd76e738,Uid:d3e222b6c2d6df9b16b9dc3f89574cd4,Namespace:kube-system,Attempt:0,}" May 17 00:13:53.385336 containerd[2668]: time="2025-05-17T00:13:53.385293133Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-3bfd76e738,Uid:2db7e8c69549782be9578c9ef83ff5b4,Namespace:kube-system,Attempt:0,}" May 17 00:13:53.527272 kubelet[3607]: E0517 00:13:53.527236 3607 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-3bfd76e738?timeout=10s\": dial tcp 147.28.151.230:6443: connect: connection refused" interval="800ms" May 17 00:13:53.686365 kubelet[3607]: I0517 00:13:53.686304 3607 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.686569 kubelet[3607]: E0517 00:13:53.686542 3607 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.151.230:6443/api/v1/nodes\": dial tcp 147.28.151.230:6443: connect: connection refused" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:53.745751 kubelet[3607]: W0517 00:13:53.745701 3607 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 17 00:13:53.745878 kubelet[3607]: E0517 00:13:53.745763 3607 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.151.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:53.786118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258125544.mount: Deactivated successfully. 
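While the API server at 147.28.151.230:6443 is still refusing connections, the lease controller entries above retry with a doubling interval: 200ms, then 400ms, then 800ms. A minimal sketch of that doubling backoff; the cap and the step count here are illustrative assumptions, not values taken from the kubelet:

```python
def lease_retry_intervals(base_ms=200, factor=2, cap_ms=7000, steps=6):
    """Yield doubling retry intervals in milliseconds, capped at cap_ms.

    Mirrors the 200ms -> 400ms -> 800ms progression seen in the log; the
    7s cap and the step count are assumptions for illustration.
    """
    interval = base_ms
    for _ in range(steps):
        yield min(interval, cap_ms)
        interval *= factor

print(list(lease_retry_intervals(steps=4)))  # -> [200, 400, 800, 1600]
```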
May 17 00:13:53.786718 containerd[2668]: time="2025-05-17T00:13:53.786690543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:13:53.787147 containerd[2668]: time="2025-05-17T00:13:53.787126342Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 17 00:13:53.787488 containerd[2668]: time="2025-05-17T00:13:53.787470429Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:13:53.787841 containerd[2668]: time="2025-05-17T00:13:53.787822756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:13:53.787938 containerd[2668]: time="2025-05-17T00:13:53.787921867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:13:53.788279 containerd[2668]: time="2025-05-17T00:13:53.788256555Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:13:53.792342 containerd[2668]: time="2025-05-17T00:13:53.792318331Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:13:53.793140 containerd[2668]: time="2025-05-17T00:13:53.793123535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 425.735789ms" May 17 00:13:53.793804 containerd[2668]: time="2025-05-17T00:13:53.793781433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 408.382749ms" May 17 00:13:53.795475 containerd[2668]: time="2025-05-17T00:13:53.795441956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:13:53.807387 kubelet[3607]: W0517 00:13:53.807341 3607 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 17 00:13:53.807458 kubelet[3607]: E0517 00:13:53.807387 3607 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.151.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:53.808079 containerd[2668]: time="2025-05-17T00:13:53.808051084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"268403\" in 427.30776ms" May 17 00:13:53.907670 containerd[2668]: time="2025-05-17T00:13:53.907586953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:13:53.907670 containerd[2668]: time="2025-05-17T00:13:53.907647827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:13:53.907784 containerd[2668]: time="2025-05-17T00:13:53.907633069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:13:53.907784 containerd[2668]: time="2025-05-17T00:13:53.907681704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:13:53.907784 containerd[2668]: time="2025-05-17T00:13:53.907692543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:53.907784 containerd[2668]: time="2025-05-17T00:13:53.907765976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:53.907906 containerd[2668]: time="2025-05-17T00:13:53.907658986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:53.907906 containerd[2668]: time="2025-05-17T00:13:53.907739578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:53.907962 containerd[2668]: time="2025-05-17T00:13:53.907908842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:13:53.908006 containerd[2668]: time="2025-05-17T00:13:53.907959158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:13:53.908040 containerd[2668]: time="2025-05-17T00:13:53.907971197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:53.908118 containerd[2668]: time="2025-05-17T00:13:53.908096825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:53.944784 systemd[1]: Started cri-containerd-19c3f5a257ef173ecfaff85d7954cea32e7edade4150218a64997788ee7b3cd9.scope - libcontainer container 19c3f5a257ef173ecfaff85d7954cea32e7edade4150218a64997788ee7b3cd9. May 17 00:13:53.946065 systemd[1]: Started cri-containerd-589e07e3f9c60e3d2913c17c747e073878c6100b884b2285c98c759ee746678b.scope - libcontainer container 589e07e3f9c60e3d2913c17c747e073878c6100b884b2285c98c759ee746678b. May 17 00:13:53.947332 systemd[1]: Started cri-containerd-6de3ca6148906a5d8424e2f8e4dfb21d9111b8eb644c714e24e14a22f1a05dc3.scope - libcontainer container 6de3ca6148906a5d8424e2f8e4dfb21d9111b8eb644c714e24e14a22f1a05dc3. 
May 17 00:13:53.952046 kubelet[3607]: W0517 00:13:53.952006 3607 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-3bfd76e738&limit=500&resourceVersion=0": dial tcp 147.28.151.230:6443: connect: connection refused May 17 00:13:53.952084 kubelet[3607]: E0517 00:13:53.952062 3607 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.151.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-3bfd76e738&limit=500&resourceVersion=0\": dial tcp 147.28.151.230:6443: connect: connection refused" logger="UnhandledError" May 17 00:13:53.967680 containerd[2668]: time="2025-05-17T00:13:53.967641755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-3bfd76e738,Uid:49874cfba95cca5951871ec6fc070229,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c3f5a257ef173ecfaff85d7954cea32e7edade4150218a64997788ee7b3cd9\"" May 17 00:13:53.969047 containerd[2668]: time="2025-05-17T00:13:53.969017905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-3bfd76e738,Uid:d3e222b6c2d6df9b16b9dc3f89574cd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"589e07e3f9c60e3d2913c17c747e073878c6100b884b2285c98c759ee746678b\"" May 17 00:13:53.969791 containerd[2668]: time="2025-05-17T00:13:53.969762754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-3bfd76e738,Uid:2db7e8c69549782be9578c9ef83ff5b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6de3ca6148906a5d8424e2f8e4dfb21d9111b8eb644c714e24e14a22f1a05dc3\"" May 17 00:13:53.970272 containerd[2668]: time="2025-05-17T00:13:53.970174996Z" level=info msg="CreateContainer within sandbox \"19c3f5a257ef173ecfaff85d7954cea32e7edade4150218a64997788ee7b3cd9\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:13:53.972195 containerd[2668]: time="2025-05-17T00:13:53.972166007Z" level=info msg="CreateContainer within sandbox \"589e07e3f9c60e3d2913c17c747e073878c6100b884b2285c98c759ee746678b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:13:53.972771 containerd[2668]: time="2025-05-17T00:13:53.972712036Z" level=info msg="CreateContainer within sandbox \"6de3ca6148906a5d8424e2f8e4dfb21d9111b8eb644c714e24e14a22f1a05dc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:13:53.976344 containerd[2668]: time="2025-05-17T00:13:53.976316255Z" level=info msg="CreateContainer within sandbox \"19c3f5a257ef173ecfaff85d7954cea32e7edade4150218a64997788ee7b3cd9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c045231c978209f4cc0307a76b28686414ad66993c32e468995df1c9c9fbec9\"" May 17 00:13:53.976971 containerd[2668]: time="2025-05-17T00:13:53.976731056Z" level=info msg="StartContainer for \"5c045231c978209f4cc0307a76b28686414ad66993c32e468995df1c9c9fbec9\"" May 17 00:13:53.977252 containerd[2668]: time="2025-05-17T00:13:53.977224849Z" level=info msg="CreateContainer within sandbox \"589e07e3f9c60e3d2913c17c747e073878c6100b884b2285c98c759ee746678b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d2732d7fc9c9de09d6b52752c09f2b444b0f9e892682d3f06c94c328803646a\"" May 17 00:13:53.977842 containerd[2668]: time="2025-05-17T00:13:53.977512702Z" level=info msg="StartContainer for \"7d2732d7fc9c9de09d6b52752c09f2b444b0f9e892682d3f06c94c328803646a\"" May 17 00:13:53.978566 containerd[2668]: time="2025-05-17T00:13:53.978539445Z" level=info msg="CreateContainer within sandbox \"6de3ca6148906a5d8424e2f8e4dfb21d9111b8eb644c714e24e14a22f1a05dc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4f1cc69265ed844c5f8e52b3ec070331e5193136fa7db8c9b666de41ddafce8\"" May 17 
00:13:53.978823 containerd[2668]: time="2025-05-17T00:13:53.978804700Z" level=info msg="StartContainer for \"c4f1cc69265ed844c5f8e52b3ec070331e5193136fa7db8c9b666de41ddafce8\"" May 17 00:13:54.007766 systemd[1]: Started cri-containerd-5c045231c978209f4cc0307a76b28686414ad66993c32e468995df1c9c9fbec9.scope - libcontainer container 5c045231c978209f4cc0307a76b28686414ad66993c32e468995df1c9c9fbec9. May 17 00:13:54.008893 systemd[1]: Started cri-containerd-7d2732d7fc9c9de09d6b52752c09f2b444b0f9e892682d3f06c94c328803646a.scope - libcontainer container 7d2732d7fc9c9de09d6b52752c09f2b444b0f9e892682d3f06c94c328803646a. May 17 00:13:54.009967 systemd[1]: Started cri-containerd-c4f1cc69265ed844c5f8e52b3ec070331e5193136fa7db8c9b666de41ddafce8.scope - libcontainer container c4f1cc69265ed844c5f8e52b3ec070331e5193136fa7db8c9b666de41ddafce8. May 17 00:13:54.032377 containerd[2668]: time="2025-05-17T00:13:54.032326461Z" level=info msg="StartContainer for \"5c045231c978209f4cc0307a76b28686414ad66993c32e468995df1c9c9fbec9\" returns successfully" May 17 00:13:54.046343 containerd[2668]: time="2025-05-17T00:13:54.046298103Z" level=info msg="StartContainer for \"c4f1cc69265ed844c5f8e52b3ec070331e5193136fa7db8c9b666de41ddafce8\" returns successfully" May 17 00:13:54.046411 containerd[2668]: time="2025-05-17T00:13:54.046298863Z" level=info msg="StartContainer for \"7d2732d7fc9c9de09d6b52752c09f2b444b0f9e892682d3f06c94c328803646a\" returns successfully" May 17 00:13:54.488858 kubelet[3607]: I0517 00:13:54.488832 3607 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:54.882964 systemd[1]: Started sshd@8-147.28.151.230:22-34.94.79.79:34766.service - OpenSSH per-connection server daemon (34.94.79.79:34766). 
May 17 00:13:55.158565 sshd[3978]: Invalid user from 34.94.79.79 port 34766 May 17 00:13:55.320213 kubelet[3607]: E0517 00:13:55.320177 3607 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-3bfd76e738\" not found" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:55.424006 kubelet[3607]: I0517 00:13:55.423935 3607 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:55.424006 kubelet[3607]: E0517 00:13:55.423968 3607 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-n-3bfd76e738\": node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:55.432271 kubelet[3607]: E0517 00:13:55.432240 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:55.533230 kubelet[3607]: E0517 00:13:55.533205 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:55.633638 kubelet[3607]: E0517 00:13:55.633611 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:55.733742 kubelet[3607]: E0517 00:13:55.733673 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:55.834137 kubelet[3607]: E0517 00:13:55.834114 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:55.935064 kubelet[3607]: E0517 00:13:55.935040 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:56.035395 kubelet[3607]: E0517 00:13:56.035321 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:56.135778 kubelet[3607]: E0517 00:13:56.135753 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:56.236265 kubelet[3607]: E0517 00:13:56.236242 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:56.336748 kubelet[3607]: E0517 00:13:56.336728 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:56.437165 kubelet[3607]: E0517 00:13:56.437139 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:56.537636 kubelet[3607]: E0517 00:13:56.537611 3607 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:56.917050 kubelet[3607]: I0517 00:13:56.917031 3607 apiserver.go:52] "Watching apiserver" May 17 00:13:56.925138 kubelet[3607]: I0517 00:13:56.925117 3607 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:13:56.954966 kubelet[3607]: W0517 00:13:56.954950 3607 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:13:57.109162 systemd[1]: Reloading requested from client PID 4034 ('systemctl') (unit session-9.scope)... May 17 00:13:57.109173 systemd[1]: Reloading... May 17 00:13:57.167603 zram_generator::config[4079]: No configuration found. May 17 00:13:57.259166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:13:57.343655 systemd[1]: Reloading finished in 234 ms. May 17 00:13:57.384295 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:57.400602 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:13:57.400866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:57.400918 systemd[1]: kubelet.service: Consumed 1.171s CPU time, 150.3M memory peak, 0B memory swap peak. May 17 00:13:57.421779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:13:57.525216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:13:57.528890 (kubelet)[4140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:13:57.559455 kubelet[4140]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:13:57.559455 kubelet[4140]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:13:57.559455 kubelet[4140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:13:57.559716 kubelet[4140]: I0517 00:13:57.559522 4140 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:13:57.564292 kubelet[4140]: I0517 00:13:57.564275 4140 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:13:57.564320 kubelet[4140]: I0517 00:13:57.564293 4140 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:13:57.564482 kubelet[4140]: I0517 00:13:57.564475 4140 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:13:57.565686 kubelet[4140]: I0517 00:13:57.565675 4140 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:13:57.567368 kubelet[4140]: I0517 00:13:57.567349 4140 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:13:57.570348 kubelet[4140]: E0517 00:13:57.570315 4140 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:13:57.570348 kubelet[4140]: I0517 00:13:57.570348 4140 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:13:57.588621 kubelet[4140]: I0517 00:13:57.588595 4140 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:13:57.588724 kubelet[4140]: I0517 00:13:57.588707 4140 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:13:57.588833 kubelet[4140]: I0517 00:13:57.588809 4140 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:13:57.588988 kubelet[4140]: I0517 00:13:57.588831 4140 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-3bfd76e738","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:13:57.589053 kubelet[4140]: I0517 00:13:57.588995 4140 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:13:57.589053 kubelet[4140]: I0517 00:13:57.589005 4140 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:13:57.589053 kubelet[4140]: I0517 00:13:57.589037 4140 state_mem.go:36] "Initialized new in-memory state store" May 17 00:13:57.589161 kubelet[4140]: I0517 00:13:57.589123 4140 kubelet.go:408] "Attempting to sync node with API server" May 17 00:13:57.589161 kubelet[4140]: I0517 00:13:57.589134 4140 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:13:57.589203 kubelet[4140]: I0517 00:13:57.589166 4140 kubelet.go:314] "Adding apiserver pod source" May 17 00:13:57.589203 kubelet[4140]: I0517 00:13:57.589177 4140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:13:57.589607 kubelet[4140]: I0517 00:13:57.589581 4140 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:13:57.590037 kubelet[4140]: I0517 00:13:57.590027 4140 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:13:57.590408 kubelet[4140]: I0517 00:13:57.590395 4140 server.go:1274] "Started kubelet" May 17 00:13:57.590477 kubelet[4140]: I0517 00:13:57.590440 4140 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:13:57.590502 kubelet[4140]: I0517 00:13:57.590469 4140 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:13:57.590723 kubelet[4140]: I0517 00:13:57.590660 4140 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:13:57.591319 kubelet[4140]: I0517 00:13:57.591306 4140 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 
17 00:13:57.591361 kubelet[4140]: I0517 00:13:57.591326 4140 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:13:57.591361 kubelet[4140]: I0517 00:13:57.591346 4140 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:13:57.591415 kubelet[4140]: I0517 00:13:57.591387 4140 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:13:57.591439 kubelet[4140]: E0517 00:13:57.591372 4140 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-3bfd76e738\" not found" May 17 00:13:57.591567 kubelet[4140]: I0517 00:13:57.591515 4140 reconciler.go:26] "Reconciler: start to sync state" May 17 00:13:57.591943 kubelet[4140]: I0517 00:13:57.591914 4140 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:13:57.592020 kubelet[4140]: E0517 00:13:57.591999 4140 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:13:57.592500 kubelet[4140]: I0517 00:13:57.592484 4140 server.go:449] "Adding debug handlers to kubelet server" May 17 00:13:57.593124 kubelet[4140]: I0517 00:13:57.593102 4140 factory.go:221] Registration of the containerd container factory successfully May 17 00:13:57.593165 kubelet[4140]: I0517 00:13:57.593126 4140 factory.go:221] Registration of the systemd container factory successfully May 17 00:13:57.598780 kubelet[4140]: I0517 00:13:57.598749 4140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:13:57.599769 kubelet[4140]: I0517 00:13:57.599753 4140 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:13:57.599769 kubelet[4140]: I0517 00:13:57.599769 4140 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:13:57.599836 kubelet[4140]: I0517 00:13:57.599784 4140 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:13:57.599836 kubelet[4140]: E0517 00:13:57.599825 4140 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:13:57.622667 kubelet[4140]: I0517 00:13:57.622641 4140 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:13:57.622667 kubelet[4140]: I0517 00:13:57.622659 4140 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:13:57.622766 kubelet[4140]: I0517 00:13:57.622678 4140 state_mem.go:36] "Initialized new in-memory state store" May 17 00:13:57.622834 kubelet[4140]: I0517 00:13:57.622823 4140 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:13:57.622879 kubelet[4140]: I0517 00:13:57.622834 4140 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:13:57.622879 kubelet[4140]: I0517 00:13:57.622855 4140 policy_none.go:49] "None policy: Start" May 17 00:13:57.623319 kubelet[4140]: I0517 00:13:57.623303 4140 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:13:57.623361 kubelet[4140]: I0517 00:13:57.623325 4140 state_mem.go:35] "Initializing new in-memory state store" May 17 00:13:57.623465 kubelet[4140]: I0517 00:13:57.623456 4140 state_mem.go:75] "Updated machine memory state" May 17 00:13:57.626466 kubelet[4140]: I0517 00:13:57.626452 4140 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:13:57.626651 kubelet[4140]: I0517 00:13:57.626639 4140 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:13:57.626707 kubelet[4140]: I0517 00:13:57.626650 4140 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:13:57.626786 kubelet[4140]: I0517 00:13:57.626773 4140 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:13:57.703881 kubelet[4140]: W0517 00:13:57.703854 4140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:13:57.704207 kubelet[4140]: W0517 00:13:57.704189 4140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:13:57.704601 kubelet[4140]: W0517 00:13:57.704582 4140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:13:57.704649 kubelet[4140]: E0517 00:13:57.704632 4140 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.730209 kubelet[4140]: I0517 00:13:57.730193 4140 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.734353 kubelet[4140]: I0517 00:13:57.734335 4140 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.734405 kubelet[4140]: I0517 00:13:57.734395 4140 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893090 kubelet[4140]: I0517 00:13:57.893056 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2db7e8c69549782be9578c9ef83ff5b4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-3bfd76e738\" (UID: \"2db7e8c69549782be9578c9ef83ff5b4\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-3bfd76e738" May 
17 00:13:57.893090 kubelet[4140]: I0517 00:13:57.893088 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893226 kubelet[4140]: I0517 00:13:57.893109 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893226 kubelet[4140]: I0517 00:13:57.893127 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893226 kubelet[4140]: I0517 00:13:57.893149 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893226 kubelet[4140]: I0517 00:13:57.893164 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/49874cfba95cca5951871ec6fc070229-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" (UID: \"49874cfba95cca5951871ec6fc070229\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893351 kubelet[4140]: I0517 00:13:57.893227 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49874cfba95cca5951871ec6fc070229-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" (UID: \"49874cfba95cca5951871ec6fc070229\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893351 kubelet[4140]: I0517 00:13:57.893290 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49874cfba95cca5951871ec6fc070229-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" (UID: \"49874cfba95cca5951871ec6fc070229\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:57.893351 kubelet[4140]: I0517 00:13:57.893317 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3e222b6c2d6df9b16b9dc3f89574cd4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-3bfd76e738\" (UID: \"d3e222b6c2d6df9b16b9dc3f89574cd4\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" May 17 00:13:58.590212 kubelet[4140]: I0517 00:13:58.590186 4140 apiserver.go:52] "Watching apiserver" May 17 00:13:58.613224 kubelet[4140]: W0517 00:13:58.613197 4140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:13:58.613350 kubelet[4140]: E0517 00:13:58.613251 4140 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4081.3.3-n-3bfd76e738\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" May 17 00:13:58.620147 kubelet[4140]: W0517 00:13:58.620118 4140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:13:58.620260 kubelet[4140]: E0517 00:13:58.620184 4140 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-n-3bfd76e738\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-3bfd76e738" May 17 00:13:58.630284 kubelet[4140]: I0517 00:13:58.630230 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-3bfd76e738" podStartSLOduration=2.630204321 podStartE2EDuration="2.630204321s" podCreationTimestamp="2025-05-17 00:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:13:58.630195561 +0000 UTC m=+1.098392777" watchObservedRunningTime="2025-05-17 00:13:58.630204321 +0000 UTC m=+1.098401497" May 17 00:13:58.635145 kubelet[4140]: I0517 00:13:58.635089 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-3bfd76e738" podStartSLOduration=1.635077827 podStartE2EDuration="1.635077827s" podCreationTimestamp="2025-05-17 00:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:13:58.635054628 +0000 UTC m=+1.103251844" watchObservedRunningTime="2025-05-17 00:13:58.635077827 +0000 UTC m=+1.103275043" May 17 00:13:58.648142 kubelet[4140]: I0517 00:13:58.648103 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-3bfd76e738" podStartSLOduration=1.648090936 
podStartE2EDuration="1.648090936s" podCreationTimestamp="2025-05-17 00:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:13:58.642170981 +0000 UTC m=+1.110368197" watchObservedRunningTime="2025-05-17 00:13:58.648090936 +0000 UTC m=+1.116288112" May 17 00:13:58.691672 kubelet[4140]: I0517 00:13:58.691646 4140 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:14:02.869069 sshd[3978]: Connection closed by invalid user 34.94.79.79 port 34766 [preauth] May 17 00:14:02.871124 systemd[1]: sshd@8-147.28.151.230:22-34.94.79.79:34766.service: Deactivated successfully. May 17 00:14:04.495347 kubelet[4140]: I0517 00:14:04.495308 4140 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:14:04.495749 containerd[2668]: time="2025-05-17T00:14:04.495625899Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:14:04.495919 kubelet[4140]: I0517 00:14:04.495763 4140 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:14:05.215562 systemd[1]: Created slice kubepods-besteffort-pod8a03c9e7_f297_402d_9044_b32f2555e974.slice - libcontainer container kubepods-besteffort-pod8a03c9e7_f297_402d_9044_b32f2555e974.slice. 
May 17 00:14:05.238622 kubelet[4140]: I0517 00:14:05.238592 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a03c9e7-f297-402d-9044-b32f2555e974-xtables-lock\") pod \"kube-proxy-rb7cl\" (UID: \"8a03c9e7-f297-402d-9044-b32f2555e974\") " pod="kube-system/kube-proxy-rb7cl" May 17 00:14:05.238708 kubelet[4140]: I0517 00:14:05.238627 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a03c9e7-f297-402d-9044-b32f2555e974-kube-proxy\") pod \"kube-proxy-rb7cl\" (UID: \"8a03c9e7-f297-402d-9044-b32f2555e974\") " pod="kube-system/kube-proxy-rb7cl" May 17 00:14:05.238708 kubelet[4140]: I0517 00:14:05.238644 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a03c9e7-f297-402d-9044-b32f2555e974-lib-modules\") pod \"kube-proxy-rb7cl\" (UID: \"8a03c9e7-f297-402d-9044-b32f2555e974\") " pod="kube-system/kube-proxy-rb7cl" May 17 00:14:05.238708 kubelet[4140]: I0517 00:14:05.238664 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66jwj\" (UniqueName: \"kubernetes.io/projected/8a03c9e7-f297-402d-9044-b32f2555e974-kube-api-access-66jwj\") pod \"kube-proxy-rb7cl\" (UID: \"8a03c9e7-f297-402d-9044-b32f2555e974\") " pod="kube-system/kube-proxy-rb7cl" May 17 00:14:05.527695 kubelet[4140]: W0517 00:14:05.527493 4140 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-n-3bfd76e738" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.3-n-3bfd76e738' and this object May 17 00:14:05.527695 kubelet[4140]: E0517 00:14:05.527607 
4140 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.3-n-3bfd76e738\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.3-n-3bfd76e738' and this object" logger="UnhandledError" May 17 00:14:05.531884 containerd[2668]: time="2025-05-17T00:14:05.531843915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rb7cl,Uid:8a03c9e7-f297-402d-9044-b32f2555e974,Namespace:kube-system,Attempt:0,}" May 17 00:14:05.537537 systemd[1]: Created slice kubepods-besteffort-pod1c87b584_df47_4381_8422_4e072d6dfaf8.slice - libcontainer container kubepods-besteffort-pod1c87b584_df47_4381_8422_4e072d6dfaf8.slice. May 17 00:14:05.540300 kubelet[4140]: I0517 00:14:05.540272 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c87b584-df47-4381-8422-4e072d6dfaf8-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-vzc2g\" (UID: \"1c87b584-df47-4381-8422-4e072d6dfaf8\") " pod="tigera-operator/tigera-operator-7c5755cdcb-vzc2g" May 17 00:14:05.540353 kubelet[4140]: I0517 00:14:05.540313 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8qq5\" (UniqueName: \"kubernetes.io/projected/1c87b584-df47-4381-8422-4e072d6dfaf8-kube-api-access-q8qq5\") pod \"tigera-operator-7c5755cdcb-vzc2g\" (UID: \"1c87b584-df47-4381-8422-4e072d6dfaf8\") " pod="tigera-operator/tigera-operator-7c5755cdcb-vzc2g" May 17 00:14:05.543872 containerd[2668]: time="2025-05-17T00:14:05.543782515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:05.543911 containerd[2668]: time="2025-05-17T00:14:05.543895830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:05.543935 containerd[2668]: time="2025-05-17T00:14:05.543908709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:05.544001 containerd[2668]: time="2025-05-17T00:14:05.543985026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:05.570807 systemd[1]: Started cri-containerd-cb3e241ab0d6e379464514d754a13bab0a9f1b5d8f58aed4b2919c0e441241c8.scope - libcontainer container cb3e241ab0d6e379464514d754a13bab0a9f1b5d8f58aed4b2919c0e441241c8. May 17 00:14:05.585844 containerd[2668]: time="2025-05-17T00:14:05.585814603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rb7cl,Uid:8a03c9e7-f297-402d-9044-b32f2555e974,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb3e241ab0d6e379464514d754a13bab0a9f1b5d8f58aed4b2919c0e441241c8\"" May 17 00:14:05.587795 containerd[2668]: time="2025-05-17T00:14:05.587769078Z" level=info msg="CreateContainer within sandbox \"cb3e241ab0d6e379464514d754a13bab0a9f1b5d8f58aed4b2919c0e441241c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:14:05.602057 containerd[2668]: time="2025-05-17T00:14:05.602013937Z" level=info msg="CreateContainer within sandbox \"cb3e241ab0d6e379464514d754a13bab0a9f1b5d8f58aed4b2919c0e441241c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b723efc86296201cdfd26003de7f928db7eda2d5ff2bd64d4e4cd60018b83b63\"" May 17 00:14:05.603606 containerd[2668]: time="2025-05-17T00:14:05.603573549Z" level=info msg="StartContainer for \"b723efc86296201cdfd26003de7f928db7eda2d5ff2bd64d4e4cd60018b83b63\"" 
May 17 00:14:05.631719 systemd[1]: Started cri-containerd-b723efc86296201cdfd26003de7f928db7eda2d5ff2bd64d4e4cd60018b83b63.scope - libcontainer container b723efc86296201cdfd26003de7f928db7eda2d5ff2bd64d4e4cd60018b83b63. May 17 00:14:05.651032 containerd[2668]: time="2025-05-17T00:14:05.651002682Z" level=info msg="StartContainer for \"b723efc86296201cdfd26003de7f928db7eda2d5ff2bd64d4e4cd60018b83b63\" returns successfully" May 17 00:14:06.138100 update_engine[2662]: I20250517 00:14:06.138032 2662 update_attempter.cc:509] Updating boot flags... May 17 00:14:06.167603 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4537) May 17 00:14:06.195605 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4537) May 17 00:14:06.223599 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (4537) May 17 00:14:06.631393 kubelet[4140]: I0517 00:14:06.631303 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rb7cl" podStartSLOduration=1.631286201 podStartE2EDuration="1.631286201s" podCreationTimestamp="2025-05-17 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:06.630856258 +0000 UTC m=+9.099053474" watchObservedRunningTime="2025-05-17 00:14:06.631286201 +0000 UTC m=+9.099483417" May 17 00:14:06.739951 containerd[2668]: time="2025-05-17T00:14:06.739909883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-vzc2g,Uid:1c87b584-df47-4381-8422-4e072d6dfaf8,Namespace:tigera-operator,Attempt:0,}" May 17 00:14:06.752640 containerd[2668]: time="2025-05-17T00:14:06.752570526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:06.752640 containerd[2668]: time="2025-05-17T00:14:06.752633803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:06.752700 containerd[2668]: time="2025-05-17T00:14:06.752645283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:06.752735 containerd[2668]: time="2025-05-17T00:14:06.752721919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:06.772707 systemd[1]: Started cri-containerd-35017845dcd6f76f1d46832a1ffe8a8b601a4fa6f73f7e833ec6643e2101c4fd.scope - libcontainer container 35017845dcd6f76f1d46832a1ffe8a8b601a4fa6f73f7e833ec6643e2101c4fd. May 17 00:14:06.795049 containerd[2668]: time="2025-05-17T00:14:06.795014152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-vzc2g,Uid:1c87b584-df47-4381-8422-4e072d6dfaf8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"35017845dcd6f76f1d46832a1ffe8a8b601a4fa6f73f7e833ec6643e2101c4fd\"" May 17 00:14:06.796172 containerd[2668]: time="2025-05-17T00:14:06.796149905Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:14:07.950264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325291188.mount: Deactivated successfully. 
May 17 00:14:10.050684 containerd[2668]: time="2025-05-17T00:14:10.050640421Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:10.051063 containerd[2668]: time="2025-05-17T00:14:10.050697580Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 17 00:14:10.051484 containerd[2668]: time="2025-05-17T00:14:10.051463835Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:10.053414 containerd[2668]: time="2025-05-17T00:14:10.053390335Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:10.054190 containerd[2668]: time="2025-05-17T00:14:10.054169150Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 3.257988526s" May 17 00:14:10.054215 containerd[2668]: time="2025-05-17T00:14:10.054196229Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 17 00:14:10.055917 containerd[2668]: time="2025-05-17T00:14:10.055893456Z" level=info msg="CreateContainer within sandbox \"35017845dcd6f76f1d46832a1ffe8a8b601a4fa6f73f7e833ec6643e2101c4fd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:14:10.060612 containerd[2668]: time="2025-05-17T00:14:10.060575868Z" level=info msg="CreateContainer within sandbox 
\"35017845dcd6f76f1d46832a1ffe8a8b601a4fa6f73f7e833ec6643e2101c4fd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"869eaf33c4b7324d05c4edfccc0ca65ea6a139720af1f640e93761be1cce7cc3\"" May 17 00:14:10.060982 containerd[2668]: time="2025-05-17T00:14:10.060956416Z" level=info msg="StartContainer for \"869eaf33c4b7324d05c4edfccc0ca65ea6a139720af1f640e93761be1cce7cc3\"" May 17 00:14:10.089779 systemd[1]: Started cri-containerd-869eaf33c4b7324d05c4edfccc0ca65ea6a139720af1f640e93761be1cce7cc3.scope - libcontainer container 869eaf33c4b7324d05c4edfccc0ca65ea6a139720af1f640e93761be1cce7cc3. May 17 00:14:10.106249 containerd[2668]: time="2025-05-17T00:14:10.106217907Z" level=info msg="StartContainer for \"869eaf33c4b7324d05c4edfccc0ca65ea6a139720af1f640e93761be1cce7cc3\" returns successfully" May 17 00:14:10.629452 kubelet[4140]: I0517 00:14:10.629398 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-vzc2g" podStartSLOduration=2.370383786 podStartE2EDuration="5.629384117s" podCreationTimestamp="2025-05-17 00:14:05 +0000 UTC" firstStartedPulling="2025-05-17 00:14:06.795846198 +0000 UTC m=+9.264043374" lastFinishedPulling="2025-05-17 00:14:10.054846489 +0000 UTC m=+12.523043705" observedRunningTime="2025-05-17 00:14:10.62928948 +0000 UTC m=+13.097486656" watchObservedRunningTime="2025-05-17 00:14:10.629384117 +0000 UTC m=+13.097581333" May 17 00:14:14.997370 sudo[2935]: pam_unix(sudo:session): session closed for user root May 17 00:14:15.059804 sshd[2932]: pam_unix(sshd:session): session closed for user core May 17 00:14:15.066161 systemd[1]: sshd@7-147.28.151.230:22-147.75.109.163:33436.service: Deactivated successfully. May 17 00:14:15.069300 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:14:15.069521 systemd[1]: session-9.scope: Consumed 7.822s CPU time, 176.1M memory peak, 0B memory swap peak. 
May 17 00:14:15.069975 systemd-logind[2650]: Session 9 logged out. Waiting for processes to exit. May 17 00:14:15.070876 systemd-logind[2650]: Removed session 9. May 17 00:14:19.354665 systemd[1]: Created slice kubepods-besteffort-pod79b6195a_bb50_4c10_9f9f_486f5c97227c.slice - libcontainer container kubepods-besteffort-pod79b6195a_bb50_4c10_9f9f_486f5c97227c.slice. May 17 00:14:19.424685 kubelet[4140]: I0517 00:14:19.424623 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28kc\" (UniqueName: \"kubernetes.io/projected/79b6195a-bb50-4c10-9f9f-486f5c97227c-kube-api-access-h28kc\") pod \"calico-typha-5cb5df87db-wm5v7\" (UID: \"79b6195a-bb50-4c10-9f9f-486f5c97227c\") " pod="calico-system/calico-typha-5cb5df87db-wm5v7" May 17 00:14:19.424685 kubelet[4140]: I0517 00:14:19.424669 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b6195a-bb50-4c10-9f9f-486f5c97227c-tigera-ca-bundle\") pod \"calico-typha-5cb5df87db-wm5v7\" (UID: \"79b6195a-bb50-4c10-9f9f-486f5c97227c\") " pod="calico-system/calico-typha-5cb5df87db-wm5v7" May 17 00:14:19.424685 kubelet[4140]: I0517 00:14:19.424688 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/79b6195a-bb50-4c10-9f9f-486f5c97227c-typha-certs\") pod \"calico-typha-5cb5df87db-wm5v7\" (UID: \"79b6195a-bb50-4c10-9f9f-486f5c97227c\") " pod="calico-system/calico-typha-5cb5df87db-wm5v7" May 17 00:14:19.664049 containerd[2668]: time="2025-05-17T00:14:19.663949331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb5df87db-wm5v7,Uid:79b6195a-bb50-4c10-9f9f-486f5c97227c,Namespace:calico-system,Attempt:0,}" May 17 00:14:19.676240 containerd[2668]: time="2025-05-17T00:14:19.676180315Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:19.676288 containerd[2668]: time="2025-05-17T00:14:19.676239114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:19.676288 containerd[2668]: time="2025-05-17T00:14:19.676251154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:19.676346 containerd[2668]: time="2025-05-17T00:14:19.676331513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:19.694718 systemd[1]: Started cri-containerd-7d7f565f3504f417d1958d18cab2d48a6e3b6b396ec343246c5ae53930e4e309.scope - libcontainer container 7d7f565f3504f417d1958d18cab2d48a6e3b6b396ec343246c5ae53930e4e309. May 17 00:14:19.717280 containerd[2668]: time="2025-05-17T00:14:19.717246790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb5df87db-wm5v7,Uid:79b6195a-bb50-4c10-9f9f-486f5c97227c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d7f565f3504f417d1958d18cab2d48a6e3b6b396ec343246c5ae53930e4e309\"" May 17 00:14:19.718369 containerd[2668]: time="2025-05-17T00:14:19.718346731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:14:19.752419 systemd[1]: Created slice kubepods-besteffort-pode93cc29f_e851_459f_b86a_ddcb1f735276.slice - libcontainer container kubepods-besteffort-pode93cc29f_e851_459f_b86a_ddcb1f735276.slice. 
May 17 00:14:19.827810 kubelet[4140]: I0517 00:14:19.827777 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-var-lib-calico\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.827866 kubelet[4140]: I0517 00:14:19.827812 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e93cc29f-e851-459f-b86a-ddcb1f735276-tigera-ca-bundle\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.827866 kubelet[4140]: I0517 00:14:19.827828 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-cni-net-dir\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.827866 kubelet[4140]: I0517 00:14:19.827843 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e93cc29f-e851-459f-b86a-ddcb1f735276-node-certs\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828003 kubelet[4140]: I0517 00:14:19.827937 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-xtables-lock\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828003 kubelet[4140]: I0517 00:14:19.827987 4140 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-cni-bin-dir\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828047 kubelet[4140]: I0517 00:14:19.828014 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-policysync\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828047 kubelet[4140]: I0517 00:14:19.828039 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55ml7\" (UniqueName: \"kubernetes.io/projected/e93cc29f-e851-459f-b86a-ddcb1f735276-kube-api-access-55ml7\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828092 kubelet[4140]: I0517 00:14:19.828062 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-flexvol-driver-host\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828092 kubelet[4140]: I0517 00:14:19.828078 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-lib-modules\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828145 kubelet[4140]: I0517 00:14:19.828095 4140 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-cni-log-dir\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.828145 kubelet[4140]: I0517 00:14:19.828111 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e93cc29f-e851-459f-b86a-ddcb1f735276-var-run-calico\") pod \"calico-node-59q2p\" (UID: \"e93cc29f-e851-459f-b86a-ddcb1f735276\") " pod="calico-system/calico-node-59q2p" May 17 00:14:19.929641 kubelet[4140]: E0517 00:14:19.929558 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:19.929641 kubelet[4140]: W0517 00:14:19.929577 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:19.929641 kubelet[4140]: E0517 00:14:19.929611 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:19.930145 kubelet[4140]: E0517 00:14:19.930126 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:19.930145 kubelet[4140]: W0517 00:14:19.930143 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:19.930198 kubelet[4140]: E0517 00:14:19.930157 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:19.931742 kubelet[4140]: E0517 00:14:19.931723 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:19.931771 kubelet[4140]: W0517 00:14:19.931739 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:19.931771 kubelet[4140]: E0517 00:14:19.931754 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:19.938177 kubelet[4140]: E0517 00:14:19.938160 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:19.938205 kubelet[4140]: W0517 00:14:19.938175 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:19.938205 kubelet[4140]: E0517 00:14:19.938187 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.044731 kubelet[4140]: E0517 00:14:20.044693 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qg6j" podUID="ffe0c287-a727-4d33-b8b4-1067d384be58" May 17 00:14:20.054663 containerd[2668]: time="2025-05-17T00:14:20.054630493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-59q2p,Uid:e93cc29f-e851-459f-b86a-ddcb1f735276,Namespace:calico-system,Attempt:0,}" May 17 00:14:20.066750 containerd[2668]: time="2025-05-17T00:14:20.066685533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:20.066781 containerd[2668]: time="2025-05-17T00:14:20.066742092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:20.066781 containerd[2668]: time="2025-05-17T00:14:20.066754172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:20.066852 containerd[2668]: time="2025-05-17T00:14:20.066834731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:20.089836 systemd[1]: Started cri-containerd-e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd.scope - libcontainer container e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd. 
May 17 00:14:20.108801 containerd[2668]: time="2025-05-17T00:14:20.108762957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-59q2p,Uid:e93cc29f-e851-459f-b86a-ddcb1f735276,Namespace:calico-system,Attempt:0,} returns sandbox id \"e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd\"" May 17 00:14:20.116036 kubelet[4140]: E0517 00:14:20.116014 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.116036 kubelet[4140]: W0517 00:14:20.116031 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.116125 kubelet[4140]: E0517 00:14:20.116048 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.116329 kubelet[4140]: E0517 00:14:20.116318 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.116329 kubelet[4140]: W0517 00:14:20.116326 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.116375 kubelet[4140]: E0517 00:14:20.116334 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.116543 kubelet[4140]: E0517 00:14:20.116532 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.116543 kubelet[4140]: W0517 00:14:20.116540 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.116584 kubelet[4140]: E0517 00:14:20.116548 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.116775 kubelet[4140]: E0517 00:14:20.116763 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.116775 kubelet[4140]: W0517 00:14:20.116772 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.116821 kubelet[4140]: E0517 00:14:20.116781 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.117002 kubelet[4140]: E0517 00:14:20.116990 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.117002 kubelet[4140]: W0517 00:14:20.116999 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.117046 kubelet[4140]: E0517 00:14:20.117007 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.117259 kubelet[4140]: E0517 00:14:20.117249 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.117259 kubelet[4140]: W0517 00:14:20.117257 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.117303 kubelet[4140]: E0517 00:14:20.117268 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.117469 kubelet[4140]: E0517 00:14:20.117460 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.117469 kubelet[4140]: W0517 00:14:20.117467 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.117509 kubelet[4140]: E0517 00:14:20.117474 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.117692 kubelet[4140]: E0517 00:14:20.117684 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.117714 kubelet[4140]: W0517 00:14:20.117692 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.117714 kubelet[4140]: E0517 00:14:20.117700 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.117864 kubelet[4140]: E0517 00:14:20.117856 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.117888 kubelet[4140]: W0517 00:14:20.117865 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.117888 kubelet[4140]: E0517 00:14:20.117872 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.118074 kubelet[4140]: E0517 00:14:20.118066 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.118097 kubelet[4140]: W0517 00:14:20.118074 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.118097 kubelet[4140]: E0517 00:14:20.118081 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.118264 kubelet[4140]: E0517 00:14:20.118256 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.118286 kubelet[4140]: W0517 00:14:20.118264 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.118286 kubelet[4140]: E0517 00:14:20.118270 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.118464 kubelet[4140]: E0517 00:14:20.118457 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.118485 kubelet[4140]: W0517 00:14:20.118464 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.118485 kubelet[4140]: E0517 00:14:20.118471 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.118662 kubelet[4140]: E0517 00:14:20.118651 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.118662 kubelet[4140]: W0517 00:14:20.118659 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.118705 kubelet[4140]: E0517 00:14:20.118666 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.118813 kubelet[4140]: E0517 00:14:20.118803 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.118813 kubelet[4140]: W0517 00:14:20.118810 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.118854 kubelet[4140]: E0517 00:14:20.118818 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.119012 kubelet[4140]: E0517 00:14:20.119002 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.119012 kubelet[4140]: W0517 00:14:20.119009 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.119055 kubelet[4140]: E0517 00:14:20.119015 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.119206 kubelet[4140]: E0517 00:14:20.119196 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.119206 kubelet[4140]: W0517 00:14:20.119204 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.119250 kubelet[4140]: E0517 00:14:20.119211 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.119431 kubelet[4140]: E0517 00:14:20.119422 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.119453 kubelet[4140]: W0517 00:14:20.119430 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.119453 kubelet[4140]: E0517 00:14:20.119440 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.119641 kubelet[4140]: E0517 00:14:20.119633 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.119662 kubelet[4140]: W0517 00:14:20.119640 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.119662 kubelet[4140]: E0517 00:14:20.119647 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.119789 kubelet[4140]: E0517 00:14:20.119781 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.119813 kubelet[4140]: W0517 00:14:20.119788 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.119813 kubelet[4140]: E0517 00:14:20.119795 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.119980 kubelet[4140]: E0517 00:14:20.119973 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.120003 kubelet[4140]: W0517 00:14:20.119980 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.120003 kubelet[4140]: E0517 00:14:20.119987 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.130301 kubelet[4140]: E0517 00:14:20.130285 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.130322 kubelet[4140]: W0517 00:14:20.130302 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.130322 kubelet[4140]: E0517 00:14:20.130316 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.130380 kubelet[4140]: I0517 00:14:20.130369 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q6jw\" (UniqueName: \"kubernetes.io/projected/ffe0c287-a727-4d33-b8b4-1067d384be58-kube-api-access-2q6jw\") pod \"csi-node-driver-2qg6j\" (UID: \"ffe0c287-a727-4d33-b8b4-1067d384be58\") " pod="calico-system/csi-node-driver-2qg6j" May 17 00:14:20.130601 kubelet[4140]: E0517 00:14:20.130586 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.130627 kubelet[4140]: W0517 00:14:20.130601 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.130627 kubelet[4140]: E0517 00:14:20.130614 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.130667 kubelet[4140]: I0517 00:14:20.130628 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ffe0c287-a727-4d33-b8b4-1067d384be58-varrun\") pod \"csi-node-driver-2qg6j\" (UID: \"ffe0c287-a727-4d33-b8b4-1067d384be58\") " pod="calico-system/csi-node-driver-2qg6j" May 17 00:14:20.130855 kubelet[4140]: E0517 00:14:20.130844 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.130882 kubelet[4140]: W0517 00:14:20.130855 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.130905 kubelet[4140]: E0517 00:14:20.130893 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.130925 kubelet[4140]: I0517 00:14:20.130908 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ffe0c287-a727-4d33-b8b4-1067d384be58-registration-dir\") pod \"csi-node-driver-2qg6j\" (UID: \"ffe0c287-a727-4d33-b8b4-1067d384be58\") " pod="calico-system/csi-node-driver-2qg6j" May 17 00:14:20.131165 kubelet[4140]: E0517 00:14:20.131150 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.131189 kubelet[4140]: W0517 00:14:20.131166 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.131189 kubelet[4140]: E0517 00:14:20.131185 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.131384 kubelet[4140]: E0517 00:14:20.131376 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.131406 kubelet[4140]: W0517 00:14:20.131384 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.131406 kubelet[4140]: E0517 00:14:20.131395 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.131592 kubelet[4140]: E0517 00:14:20.131582 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.131617 kubelet[4140]: W0517 00:14:20.131596 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.131617 kubelet[4140]: E0517 00:14:20.131607 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.131786 kubelet[4140]: E0517 00:14:20.131778 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.131807 kubelet[4140]: W0517 00:14:20.131787 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.131807 kubelet[4140]: E0517 00:14:20.131797 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.132003 kubelet[4140]: E0517 00:14:20.131994 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.132023 kubelet[4140]: W0517 00:14:20.132003 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.132023 kubelet[4140]: E0517 00:14:20.132013 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.132063 kubelet[4140]: I0517 00:14:20.132034 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ffe0c287-a727-4d33-b8b4-1067d384be58-socket-dir\") pod \"csi-node-driver-2qg6j\" (UID: \"ffe0c287-a727-4d33-b8b4-1067d384be58\") " pod="calico-system/csi-node-driver-2qg6j" May 17 00:14:20.132245 kubelet[4140]: E0517 00:14:20.132235 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.132268 kubelet[4140]: W0517 00:14:20.132246 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.132290 kubelet[4140]: E0517 00:14:20.132270 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.132312 kubelet[4140]: I0517 00:14:20.132299 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffe0c287-a727-4d33-b8b4-1067d384be58-kubelet-dir\") pod \"csi-node-driver-2qg6j\" (UID: \"ffe0c287-a727-4d33-b8b4-1067d384be58\") " pod="calico-system/csi-node-driver-2qg6j" May 17 00:14:20.132446 kubelet[4140]: E0517 00:14:20.132438 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.132467 kubelet[4140]: W0517 00:14:20.132447 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.132491 kubelet[4140]: E0517 00:14:20.132470 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.132646 kubelet[4140]: E0517 00:14:20.132638 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.132667 kubelet[4140]: W0517 00:14:20.132649 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.132667 kubelet[4140]: E0517 00:14:20.132660 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.132856 kubelet[4140]: E0517 00:14:20.132844 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.132856 kubelet[4140]: W0517 00:14:20.132852 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.132940 kubelet[4140]: E0517 00:14:20.132862 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.133046 kubelet[4140]: E0517 00:14:20.133037 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.133046 kubelet[4140]: W0517 00:14:20.133045 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.133087 kubelet[4140]: E0517 00:14:20.133052 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.133229 kubelet[4140]: E0517 00:14:20.133221 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.133252 kubelet[4140]: W0517 00:14:20.133229 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.133252 kubelet[4140]: E0517 00:14:20.133236 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.133427 kubelet[4140]: E0517 00:14:20.133419 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.133450 kubelet[4140]: W0517 00:14:20.133427 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.133450 kubelet[4140]: E0517 00:14:20.133434 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.232783 kubelet[4140]: E0517 00:14:20.232715 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.232783 kubelet[4140]: W0517 00:14:20.232733 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.232783 kubelet[4140]: E0517 00:14:20.232750 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.233006 kubelet[4140]: E0517 00:14:20.232937 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.233006 kubelet[4140]: W0517 00:14:20.232946 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.233006 kubelet[4140]: E0517 00:14:20.232957 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.233192 kubelet[4140]: E0517 00:14:20.233177 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.233192 kubelet[4140]: W0517 00:14:20.233189 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.233241 kubelet[4140]: E0517 00:14:20.233202 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.233432 kubelet[4140]: E0517 00:14:20.233419 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.233432 kubelet[4140]: W0517 00:14:20.233430 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.233481 kubelet[4140]: E0517 00:14:20.233443 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.233630 kubelet[4140]: E0517 00:14:20.233618 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.233630 kubelet[4140]: W0517 00:14:20.233627 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.233677 kubelet[4140]: E0517 00:14:20.233638 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.233868 kubelet[4140]: E0517 00:14:20.233848 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.233895 kubelet[4140]: W0517 00:14:20.233866 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.233895 kubelet[4140]: E0517 00:14:20.233886 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.234068 kubelet[4140]: E0517 00:14:20.234057 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.234068 kubelet[4140]: W0517 00:14:20.234066 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.234113 kubelet[4140]: E0517 00:14:20.234076 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.234252 kubelet[4140]: E0517 00:14:20.234241 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.234252 kubelet[4140]: W0517 00:14:20.234249 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.234299 kubelet[4140]: E0517 00:14:20.234274 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.234402 kubelet[4140]: E0517 00:14:20.234394 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.234425 kubelet[4140]: W0517 00:14:20.234401 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.234425 kubelet[4140]: E0517 00:14:20.234417 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.234550 kubelet[4140]: E0517 00:14:20.234543 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.234576 kubelet[4140]: W0517 00:14:20.234550 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.234611 kubelet[4140]: E0517 00:14:20.234571 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.234694 kubelet[4140]: E0517 00:14:20.234685 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.234719 kubelet[4140]: W0517 00:14:20.234693 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.234719 kubelet[4140]: E0517 00:14:20.234709 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.234828 kubelet[4140]: E0517 00:14:20.234820 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.234855 kubelet[4140]: W0517 00:14:20.234828 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.234855 kubelet[4140]: E0517 00:14:20.234842 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.235009 kubelet[4140]: E0517 00:14:20.235000 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.235033 kubelet[4140]: W0517 00:14:20.235009 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.235033 kubelet[4140]: E0517 00:14:20.235021 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.235197 kubelet[4140]: E0517 00:14:20.235187 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.235223 kubelet[4140]: W0517 00:14:20.235197 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.235223 kubelet[4140]: E0517 00:14:20.235208 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.235365 kubelet[4140]: E0517 00:14:20.235356 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.235388 kubelet[4140]: W0517 00:14:20.235364 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.235388 kubelet[4140]: E0517 00:14:20.235375 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.235628 kubelet[4140]: E0517 00:14:20.235615 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.235657 kubelet[4140]: W0517 00:14:20.235629 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.235657 kubelet[4140]: E0517 00:14:20.235645 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.235798 kubelet[4140]: E0517 00:14:20.235786 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.235798 kubelet[4140]: W0517 00:14:20.235795 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.235847 kubelet[4140]: E0517 00:14:20.235806 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.236061 kubelet[4140]: E0517 00:14:20.236048 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.236061 kubelet[4140]: W0517 00:14:20.236059 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.236108 kubelet[4140]: E0517 00:14:20.236080 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.236227 kubelet[4140]: E0517 00:14:20.236216 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.236227 kubelet[4140]: W0517 00:14:20.236225 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.236270 kubelet[4140]: E0517 00:14:20.236240 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:14:20.236411 kubelet[4140]: E0517 00:14:20.236400 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.236411 kubelet[4140]: W0517 00:14:20.236408 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.236455 kubelet[4140]: E0517 00:14:20.236422 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.483817 containerd[2668]: time="2025-05-17T00:14:20.483758830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:20.483866 containerd[2668]: time="2025-05-17T00:14:20.483770550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 17 00:14:20.484501 containerd[2668]: time="2025-05-17T00:14:20.484482138Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:20.486220 containerd[2668]: time="2025-05-17T00:14:20.486192430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:20.486909 containerd[2668]: time="2025-05-17T00:14:20.486885298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 768.511327ms" May 17 00:14:20.486936 containerd[2668]: time="2025-05-17T00:14:20.486913698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 17 00:14:20.487571 containerd[2668]: time="2025-05-17T00:14:20.487549007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:14:20.492114 containerd[2668]: time="2025-05-17T00:14:20.492089732Z" level=info msg="CreateContainer within sandbox 
\"7d7f565f3504f417d1958d18cab2d48a6e3b6b396ec343246c5ae53930e4e309\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:14:20.497366 containerd[2668]: time="2025-05-17T00:14:20.497336645Z" level=info msg="CreateContainer within sandbox \"7d7f565f3504f417d1958d18cab2d48a6e3b6b396ec343246c5ae53930e4e309\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9fd868d0423e814929d6ed101bd91b4c019ed712d3bc480f15de809eab72f9d1\"" May 17 00:14:20.497635 containerd[2668]: time="2025-05-17T00:14:20.497608081Z" level=info msg="StartContainer for \"9fd868d0423e814929d6ed101bd91b4c019ed712d3bc480f15de809eab72f9d1\"" May 17 00:14:20.524720 systemd[1]: Started cri-containerd-9fd868d0423e814929d6ed101bd91b4c019ed712d3bc480f15de809eab72f9d1.scope - libcontainer container 9fd868d0423e814929d6ed101bd91b4c019ed712d3bc480f15de809eab72f9d1. May 17 00:14:20.558717 containerd[2668]: time="2025-05-17T00:14:20.558683270Z" level=info msg="StartContainer for \"9fd868d0423e814929d6ed101bd91b4c019ed712d3bc480f15de809eab72f9d1\" returns successfully" May 17 00:14:20.725218 kubelet[4140]: E0517 00:14:20.725082 4140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:14:20.725218 kubelet[4140]: W0517 00:14:20.725108 4140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:14:20.725218 kubelet[4140]: E0517 00:14:20.725128 4140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:14:20.763479 containerd[2668]: time="2025-05-17T00:14:20.763450241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:20.763722 containerd[2668]: time="2025-05-17T00:14:20.763477440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 17 00:14:20.764202 containerd[2668]: time="2025-05-17T00:14:20.764180989Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:20.765876 containerd[2668]: time="2025-05-17T00:14:20.765852921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:20.766529 containerd[2668]: time="2025-05-17T00:14:20.766501510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 278.922663ms" May 17 00:14:20.766582 containerd[2668]: time="2025-05-17T00:14:20.766533830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 00:14:20.767943 containerd[2668]: time="2025-05-17T00:14:20.767920927Z" level=info msg="CreateContainer within sandbox \"e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:14:20.774429 containerd[2668]: time="2025-05-17T00:14:20.774403859Z" level=info msg="CreateContainer within sandbox \"e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911\"" May 17 00:14:20.774679 containerd[2668]: time="2025-05-17T00:14:20.774662095Z" level=info msg="StartContainer for \"cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911\"" May 17 00:14:20.806779 systemd[1]: Started cri-containerd-cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911.scope - libcontainer container cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911. May 17 00:14:20.836936 systemd[1]: cri-containerd-cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911.scope: Deactivated successfully. May 17 00:14:20.837066 containerd[2668]: time="2025-05-17T00:14:20.837032263Z" level=info msg="StartContainer for \"cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911\" returns successfully" May 17 00:14:20.996395 containerd[2668]: time="2025-05-17T00:14:20.996287347Z" level=info msg="shim disconnected" id=cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911 namespace=k8s.io May 17 00:14:20.996395 containerd[2668]: time="2025-05-17T00:14:20.996337546Z" level=warning msg="cleaning up after shim disconnected" id=cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911 namespace=k8s.io May 17 00:14:20.996395 containerd[2668]: time="2025-05-17T00:14:20.996347386Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:14:21.529418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb4612818ee1874a953c7bb58c07f54785c11919e2ee5074e964a14255010911-rootfs.mount: Deactivated successfully. 
May 17 00:14:21.600405 kubelet[4140]: E0517 00:14:21.600364 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qg6j" podUID="ffe0c287-a727-4d33-b8b4-1067d384be58" May 17 00:14:21.638558 containerd[2668]: time="2025-05-17T00:14:21.638526136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:14:21.659083 kubelet[4140]: I0517 00:14:21.659031 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cb5df87db-wm5v7" podStartSLOduration=1.889711664 podStartE2EDuration="2.659017218s" podCreationTimestamp="2025-05-17 00:14:19 +0000 UTC" firstStartedPulling="2025-05-17 00:14:19.718130855 +0000 UTC m=+22.186328071" lastFinishedPulling="2025-05-17 00:14:20.487436449 +0000 UTC m=+22.955633625" observedRunningTime="2025-05-17 00:14:20.643313989 +0000 UTC m=+23.111511165" watchObservedRunningTime="2025-05-17 00:14:21.659017218 +0000 UTC m=+24.127214434" May 17 00:14:22.481373 containerd[2668]: time="2025-05-17T00:14:22.481334444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:22.481732 containerd[2668]: time="2025-05-17T00:14:22.481388083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 17 00:14:22.482060 containerd[2668]: time="2025-05-17T00:14:22.482038713Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:22.483910 containerd[2668]: time="2025-05-17T00:14:22.483887327Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:22.484708 containerd[2668]: time="2025-05-17T00:14:22.484682715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 846.121979ms" May 17 00:14:22.484734 containerd[2668]: time="2025-05-17T00:14:22.484715074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 00:14:22.486285 containerd[2668]: time="2025-05-17T00:14:22.486265212Z" level=info msg="CreateContainer within sandbox \"e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:14:22.491752 containerd[2668]: time="2025-05-17T00:14:22.491725692Z" level=info msg="CreateContainer within sandbox \"e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063\"" May 17 00:14:22.492091 containerd[2668]: time="2025-05-17T00:14:22.492073207Z" level=info msg="StartContainer for \"fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063\"" May 17 00:14:22.530719 systemd[1]: Started cri-containerd-fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063.scope - libcontainer container fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063. 
May 17 00:14:22.549047 containerd[2668]: time="2025-05-17T00:14:22.549015019Z" level=info msg="StartContainer for \"fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063\" returns successfully" May 17 00:14:22.944943 containerd[2668]: time="2025-05-17T00:14:22.944903020Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:14:22.946803 systemd[1]: cri-containerd-fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063.scope: Deactivated successfully. May 17 00:14:22.961462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063-rootfs.mount: Deactivated successfully. May 17 00:14:22.966179 kubelet[4140]: I0517 00:14:22.966160 4140 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:14:22.985846 systemd[1]: Created slice kubepods-burstable-pod0e09e03c_e3f6_450c_916f_1c8c1d6def9e.slice - libcontainer container kubepods-burstable-pod0e09e03c_e3f6_450c_916f_1c8c1d6def9e.slice. May 17 00:14:22.989606 systemd[1]: Created slice kubepods-burstable-pod46e0c7e1_4d50_450c_9360_f91ac6e40f72.slice - libcontainer container kubepods-burstable-pod46e0c7e1_4d50_450c_9360_f91ac6e40f72.slice. May 17 00:14:22.993225 systemd[1]: Created slice kubepods-besteffort-pod8e476acd_d55f_474e_8792_7a4ffbe093e3.slice - libcontainer container kubepods-besteffort-pod8e476acd_d55f_474e_8792_7a4ffbe093e3.slice. May 17 00:14:22.997027 systemd[1]: Created slice kubepods-besteffort-pod9a7c4f20_f66d_476e_aedc_1d3cd8a51bc2.slice - libcontainer container kubepods-besteffort-pod9a7c4f20_f66d_476e_aedc_1d3cd8a51bc2.slice. 
May 17 00:14:23.000972 systemd[1]: Created slice kubepods-besteffort-podc38caedc_750a_4223_9bb4_03b5231f627f.slice - libcontainer container kubepods-besteffort-podc38caedc_750a_4223_9bb4_03b5231f627f.slice. May 17 00:14:23.004555 systemd[1]: Created slice kubepods-besteffort-podc98c37c4_9226_427e_bd6e_d36379e173df.slice - libcontainer container kubepods-besteffort-podc98c37c4_9226_427e_bd6e_d36379e173df.slice. May 17 00:14:23.008340 systemd[1]: Created slice kubepods-besteffort-pod92b8742a_1089_44d9_aa7e_9b92fa6f958f.slice - libcontainer container kubepods-besteffort-pod92b8742a_1089_44d9_aa7e_9b92fa6f958f.slice. May 17 00:14:23.049616 kubelet[4140]: I0517 00:14:23.049574 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c38caedc-750a-4223-9bb4-03b5231f627f-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-q5w5n\" (UID: \"c38caedc-750a-4223-9bb4-03b5231f627f\") " pod="calico-system/goldmane-8f77d7b6c-q5w5n" May 17 00:14:23.049723 kubelet[4140]: I0517 00:14:23.049629 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2-calico-apiserver-certs\") pod \"calico-apiserver-84f7cc55c9-7jvwm\" (UID: \"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2\") " pod="calico-apiserver/calico-apiserver-84f7cc55c9-7jvwm" May 17 00:14:23.049723 kubelet[4140]: I0517 00:14:23.049674 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92b8742a-1089-44d9-aa7e-9b92fa6f958f-tigera-ca-bundle\") pod \"calico-kube-controllers-57674c48cd-bjqp6\" (UID: \"92b8742a-1089-44d9-aa7e-9b92fa6f958f\") " pod="calico-system/calico-kube-controllers-57674c48cd-bjqp6" May 17 00:14:23.049723 kubelet[4140]: I0517 00:14:23.049692 4140 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46e0c7e1-4d50-450c-9360-f91ac6e40f72-config-volume\") pod \"coredns-7c65d6cfc9-28zzb\" (UID: \"46e0c7e1-4d50-450c-9360-f91ac6e40f72\") " pod="kube-system/coredns-7c65d6cfc9-28zzb" May 17 00:14:23.049723 kubelet[4140]: I0517 00:14:23.049709 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxxd4\" (UniqueName: \"kubernetes.io/projected/9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2-kube-api-access-nxxd4\") pod \"calico-apiserver-84f7cc55c9-7jvwm\" (UID: \"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2\") " pod="calico-apiserver/calico-apiserver-84f7cc55c9-7jvwm" May 17 00:14:23.049885 kubelet[4140]: I0517 00:14:23.049726 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e09e03c-e3f6-450c-916f-1c8c1d6def9e-config-volume\") pod \"coredns-7c65d6cfc9-st4g5\" (UID: \"0e09e03c-e3f6-450c-916f-1c8c1d6def9e\") " pod="kube-system/coredns-7c65d6cfc9-st4g5" May 17 00:14:23.049885 kubelet[4140]: I0517 00:14:23.049744 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbv5\" (UniqueName: \"kubernetes.io/projected/0e09e03c-e3f6-450c-916f-1c8c1d6def9e-kube-api-access-rfbv5\") pod \"coredns-7c65d6cfc9-st4g5\" (UID: \"0e09e03c-e3f6-450c-916f-1c8c1d6def9e\") " pod="kube-system/coredns-7c65d6cfc9-st4g5" May 17 00:14:23.049885 kubelet[4140]: I0517 00:14:23.049794 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-backend-key-pair\") pod \"whisker-78fd6f9455-vxl45\" (UID: \"c98c37c4-9226-427e-bd6e-d36379e173df\") " pod="calico-system/whisker-78fd6f9455-vxl45" May 17 00:14:23.049885 
kubelet[4140]: I0517 00:14:23.049853 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn29r\" (UniqueName: \"kubernetes.io/projected/92b8742a-1089-44d9-aa7e-9b92fa6f958f-kube-api-access-qn29r\") pod \"calico-kube-controllers-57674c48cd-bjqp6\" (UID: \"92b8742a-1089-44d9-aa7e-9b92fa6f958f\") " pod="calico-system/calico-kube-controllers-57674c48cd-bjqp6" May 17 00:14:23.049885 kubelet[4140]: I0517 00:14:23.049876 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-ca-bundle\") pod \"whisker-78fd6f9455-vxl45\" (UID: \"c98c37c4-9226-427e-bd6e-d36379e173df\") " pod="calico-system/whisker-78fd6f9455-vxl45" May 17 00:14:23.050078 kubelet[4140]: I0517 00:14:23.049892 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqzx\" (UniqueName: \"kubernetes.io/projected/c98c37c4-9226-427e-bd6e-d36379e173df-kube-api-access-rjqzx\") pod \"whisker-78fd6f9455-vxl45\" (UID: \"c98c37c4-9226-427e-bd6e-d36379e173df\") " pod="calico-system/whisker-78fd6f9455-vxl45" May 17 00:14:23.050078 kubelet[4140]: I0517 00:14:23.049911 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jznfq\" (UniqueName: \"kubernetes.io/projected/46e0c7e1-4d50-450c-9360-f91ac6e40f72-kube-api-access-jznfq\") pod \"coredns-7c65d6cfc9-28zzb\" (UID: \"46e0c7e1-4d50-450c-9360-f91ac6e40f72\") " pod="kube-system/coredns-7c65d6cfc9-28zzb" May 17 00:14:23.050078 kubelet[4140]: I0517 00:14:23.049930 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38caedc-750a-4223-9bb4-03b5231f627f-config\") pod \"goldmane-8f77d7b6c-q5w5n\" (UID: \"c38caedc-750a-4223-9bb4-03b5231f627f\") " 
pod="calico-system/goldmane-8f77d7b6c-q5w5n" May 17 00:14:23.050078 kubelet[4140]: I0517 00:14:23.049945 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c38caedc-750a-4223-9bb4-03b5231f627f-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-q5w5n\" (UID: \"c38caedc-750a-4223-9bb4-03b5231f627f\") " pod="calico-system/goldmane-8f77d7b6c-q5w5n" May 17 00:14:23.050078 kubelet[4140]: I0517 00:14:23.049961 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhx2\" (UniqueName: \"kubernetes.io/projected/c38caedc-750a-4223-9bb4-03b5231f627f-kube-api-access-9jhx2\") pod \"goldmane-8f77d7b6c-q5w5n\" (UID: \"c38caedc-750a-4223-9bb4-03b5231f627f\") " pod="calico-system/goldmane-8f77d7b6c-q5w5n" May 17 00:14:23.050230 kubelet[4140]: I0517 00:14:23.049979 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e476acd-d55f-474e-8792-7a4ffbe093e3-calico-apiserver-certs\") pod \"calico-apiserver-84f7cc55c9-spxxt\" (UID: \"8e476acd-d55f-474e-8792-7a4ffbe093e3\") " pod="calico-apiserver/calico-apiserver-84f7cc55c9-spxxt" May 17 00:14:23.050230 kubelet[4140]: I0517 00:14:23.050005 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrswk\" (UniqueName: \"kubernetes.io/projected/8e476acd-d55f-474e-8792-7a4ffbe093e3-kube-api-access-lrswk\") pod \"calico-apiserver-84f7cc55c9-spxxt\" (UID: \"8e476acd-d55f-474e-8792-7a4ffbe093e3\") " pod="calico-apiserver/calico-apiserver-84f7cc55c9-spxxt" May 17 00:14:23.082097 containerd[2668]: time="2025-05-17T00:14:23.082042018Z" level=info msg="shim disconnected" id=fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063 namespace=k8s.io May 17 00:14:23.082097 containerd[2668]: 
time="2025-05-17T00:14:23.082096338Z" level=warning msg="cleaning up after shim disconnected" id=fd082f3d22f30718b8e5e345264a2f1a9487ef7a023b750860afe65db2cac063 namespace=k8s.io May 17 00:14:23.082185 containerd[2668]: time="2025-05-17T00:14:23.082104297Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:14:23.288998 containerd[2668]: time="2025-05-17T00:14:23.288884557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-st4g5,Uid:0e09e03c-e3f6-450c-916f-1c8c1d6def9e,Namespace:kube-system,Attempt:0,}" May 17 00:14:23.292498 containerd[2668]: time="2025-05-17T00:14:23.292449109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28zzb,Uid:46e0c7e1-4d50-450c-9360-f91ac6e40f72,Namespace:kube-system,Attempt:0,}" May 17 00:14:23.295980 containerd[2668]: time="2025-05-17T00:14:23.295946661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-spxxt,Uid:8e476acd-d55f-474e-8792-7a4ffbe093e3,Namespace:calico-apiserver,Attempt:0,}" May 17 00:14:23.299480 containerd[2668]: time="2025-05-17T00:14:23.299439894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-7jvwm,Uid:9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2,Namespace:calico-apiserver,Attempt:0,}" May 17 00:14:23.302970 containerd[2668]: time="2025-05-17T00:14:23.302946126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-q5w5n,Uid:c38caedc-750a-4223-9bb4-03b5231f627f,Namespace:calico-system,Attempt:0,}" May 17 00:14:23.307462 containerd[2668]: time="2025-05-17T00:14:23.307421585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fd6f9455-vxl45,Uid:c98c37c4-9226-427e-bd6e-d36379e173df,Namespace:calico-system,Attempt:0,}" May 17 00:14:23.311158 containerd[2668]: time="2025-05-17T00:14:23.311121694Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-57674c48cd-bjqp6,Uid:92b8742a-1089-44d9-aa7e-9b92fa6f958f,Namespace:calico-system,Attempt:0,}" May 17 00:14:23.363986 containerd[2668]: time="2025-05-17T00:14:23.363940094Z" level=error msg="Failed to destroy network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.364318 containerd[2668]: time="2025-05-17T00:14:23.364282849Z" level=error msg="Failed to destroy network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.364386 containerd[2668]: time="2025-05-17T00:14:23.364294609Z" level=error msg="encountered an error cleaning up failed sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.364433 containerd[2668]: time="2025-05-17T00:14:23.364414767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57674c48cd-bjqp6,Uid:92b8742a-1089-44d9-aa7e-9b92fa6f958f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.364656 containerd[2668]: 
time="2025-05-17T00:14:23.364631484Z" level=error msg="encountered an error cleaning up failed sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.364701 containerd[2668]: time="2025-05-17T00:14:23.364681724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28zzb,Uid:46e0c7e1-4d50-450c-9360-f91ac6e40f72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.364770 containerd[2668]: time="2025-05-17T00:14:23.364736323Z" level=error msg="Failed to destroy network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.364826 containerd[2668]: time="2025-05-17T00:14:23.364642484Z" level=error msg="Failed to destroy network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365048 kubelet[4140]: E0517 00:14:23.364603 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365089 containerd[2668]: time="2025-05-17T00:14:23.365049119Z" level=error msg="Failed to destroy network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365214 containerd[2668]: time="2025-05-17T00:14:23.365187317Z" level=error msg="Failed to destroy network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365245 kubelet[4140]: E0517 00:14:23.365217 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365281 kubelet[4140]: E0517 00:14:23.365261 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-28zzb" May 17 00:14:23.365307 kubelet[4140]: E0517 00:14:23.365283 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-28zzb" May 17 00:14:23.365717 kubelet[4140]: E0517 00:14:23.365316 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57674c48cd-bjqp6" May 17 00:14:23.365717 kubelet[4140]: E0517 00:14:23.365329 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-28zzb_kube-system(46e0c7e1-4d50-450c-9360-f91ac6e40f72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-28zzb_kube-system(46e0c7e1-4d50-450c-9360-f91ac6e40f72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-28zzb" podUID="46e0c7e1-4d50-450c-9360-f91ac6e40f72" May 17 00:14:23.365717 kubelet[4140]: E0517 00:14:23.365361 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57674c48cd-bjqp6" May 17 00:14:23.365868 containerd[2668]: time="2025-05-17T00:14:23.365515832Z" level=error msg="Failed to destroy network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365868 containerd[2668]: time="2025-05-17T00:14:23.365578712Z" level=error msg="encountered an error cleaning up failed sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365868 containerd[2668]: time="2025-05-17T00:14:23.365639391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fd6f9455-vxl45,Uid:c98c37c4-9226-427e-bd6e-d36379e173df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365868 containerd[2668]: time="2025-05-17T00:14:23.365718350Z" level=error msg="encountered an error cleaning up failed sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 17 00:14:23.365868 containerd[2668]: time="2025-05-17T00:14:23.365759389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-q5w5n,Uid:c38caedc-750a-4223-9bb4-03b5231f627f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365868 containerd[2668]: time="2025-05-17T00:14:23.365790989Z" level=error msg="encountered an error cleaning up failed sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.365868 containerd[2668]: time="2025-05-17T00:14:23.365836388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-spxxt,Uid:8e476acd-d55f-474e-8792-7a4ffbe093e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366094 kubelet[4140]: E0517 00:14:23.365413 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57674c48cd-bjqp6_calico-system(92b8742a-1089-44d9-aa7e-9b92fa6f958f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57674c48cd-bjqp6_calico-system(92b8742a-1089-44d9-aa7e-9b92fa6f958f)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57674c48cd-bjqp6" podUID="92b8742a-1089-44d9-aa7e-9b92fa6f958f" May 17 00:14:23.366094 kubelet[4140]: E0517 00:14:23.365783 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366094 kubelet[4140]: E0517 00:14:23.365828 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78fd6f9455-vxl45" May 17 00:14:23.366181 containerd[2668]: time="2025-05-17T00:14:23.365958186Z" level=error msg="encountered an error cleaning up failed sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366181 containerd[2668]: time="2025-05-17T00:14:23.365993306Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-st4g5,Uid:0e09e03c-e3f6-450c-916f-1c8c1d6def9e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366229 kubelet[4140]: E0517 00:14:23.365844 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78fd6f9455-vxl45" May 17 00:14:23.366229 kubelet[4140]: E0517 00:14:23.365876 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78fd6f9455-vxl45_calico-system(c98c37c4-9226-427e-bd6e-d36379e173df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78fd6f9455-vxl45_calico-system(c98c37c4-9226-427e-bd6e-d36379e173df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78fd6f9455-vxl45" podUID="c98c37c4-9226-427e-bd6e-d36379e173df" May 17 00:14:23.366229 kubelet[4140]: E0517 00:14:23.365897 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366309 kubelet[4140]: E0517 00:14:23.365942 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-q5w5n" May 17 00:14:23.366309 kubelet[4140]: E0517 00:14:23.365951 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366309 kubelet[4140]: E0517 00:14:23.365958 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-q5w5n" May 17 00:14:23.366373 kubelet[4140]: E0517 00:14:23.366034 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:14:23.366373 kubelet[4140]: E0517 00:14:23.366037 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f7cc55c9-spxxt" May 17 00:14:23.366373 kubelet[4140]: E0517 00:14:23.366082 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f7cc55c9-spxxt" May 17 00:14:23.366492 kubelet[4140]: E0517 00:14:23.366123 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84f7cc55c9-spxxt_calico-apiserver(8e476acd-d55f-474e-8792-7a4ffbe093e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84f7cc55c9-spxxt_calico-apiserver(8e476acd-d55f-474e-8792-7a4ffbe093e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f7cc55c9-spxxt" podUID="8e476acd-d55f-474e-8792-7a4ffbe093e3" May 17 00:14:23.366492 kubelet[4140]: E0517 00:14:23.366135 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366492 kubelet[4140]: E0517 00:14:23.366297 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-st4g5" May 17 00:14:23.366569 containerd[2668]: time="2025-05-17T00:14:23.366419420Z" level=error msg="encountered an error cleaning up failed sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366569 containerd[2668]: time="2025-05-17T00:14:23.366501259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-7jvwm,Uid:9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.366646 kubelet[4140]: E0517 00:14:23.366324 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-st4g5" May 17 00:14:23.366646 kubelet[4140]: E0517 00:14:23.366355 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-st4g5_kube-system(0e09e03c-e3f6-450c-916f-1c8c1d6def9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-st4g5_kube-system(0e09e03c-e3f6-450c-916f-1c8c1d6def9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-st4g5" podUID="0e09e03c-e3f6-450c-916f-1c8c1d6def9e" May 17 00:14:23.367288 kubelet[4140]: E0517 00:14:23.367262 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.367314 
kubelet[4140]: E0517 00:14:23.367302 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f7cc55c9-7jvwm" May 17 00:14:23.367335 kubelet[4140]: E0517 00:14:23.367318 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f7cc55c9-7jvwm" May 17 00:14:23.367369 kubelet[4140]: E0517 00:14:23.367349 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84f7cc55c9-7jvwm_calico-apiserver(9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84f7cc55c9-7jvwm_calico-apiserver(9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f7cc55c9-7jvwm" podUID="9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2" May 17 00:14:23.605227 systemd[1]: Created slice kubepods-besteffort-podffe0c287_a727_4d33_b8b4_1067d384be58.slice - libcontainer container 
kubepods-besteffort-podffe0c287_a727_4d33_b8b4_1067d384be58.slice. May 17 00:14:23.606898 containerd[2668]: time="2025-05-17T00:14:23.606865981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2qg6j,Uid:ffe0c287-a727-4d33-b8b4-1067d384be58,Namespace:calico-system,Attempt:0,}" May 17 00:14:23.642757 kubelet[4140]: I0517 00:14:23.642731 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:23.643413 containerd[2668]: time="2025-05-17T00:14:23.643211685Z" level=info msg="StopPodSandbox for \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\"" May 17 00:14:23.643413 containerd[2668]: time="2025-05-17T00:14:23.643372323Z" level=info msg="Ensure that sandbox 876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88 in task-service has been cleanup successfully" May 17 00:14:23.643525 kubelet[4140]: I0517 00:14:23.643399 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:23.643788 containerd[2668]: time="2025-05-17T00:14:23.643768158Z" level=info msg="StopPodSandbox for \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\"" May 17 00:14:23.643933 containerd[2668]: time="2025-05-17T00:14:23.643919836Z" level=info msg="Ensure that sandbox 1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4 in task-service has been cleanup successfully" May 17 00:14:23.644075 kubelet[4140]: I0517 00:14:23.644060 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:23.644494 containerd[2668]: time="2025-05-17T00:14:23.644474908Z" level=info msg="StopPodSandbox for \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\"" May 17 00:14:23.644625 containerd[2668]: 
time="2025-05-17T00:14:23.644611026Z" level=info msg="Ensure that sandbox b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c in task-service has been cleanup successfully" May 17 00:14:23.645070 kubelet[4140]: I0517 00:14:23.645056 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:23.645513 containerd[2668]: time="2025-05-17T00:14:23.645495654Z" level=info msg="StopPodSandbox for \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\"" May 17 00:14:23.645659 containerd[2668]: time="2025-05-17T00:14:23.645644892Z" level=info msg="Ensure that sandbox 9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed in task-service has been cleanup successfully" May 17 00:14:23.645852 kubelet[4140]: I0517 00:14:23.645839 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:23.646200 containerd[2668]: time="2025-05-17T00:14:23.646178445Z" level=info msg="StopPodSandbox for \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\"" May 17 00:14:23.646333 containerd[2668]: time="2025-05-17T00:14:23.646318923Z" level=info msg="Ensure that sandbox 79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e in task-service has been cleanup successfully" May 17 00:14:23.646466 kubelet[4140]: I0517 00:14:23.646453 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:23.646814 containerd[2668]: time="2025-05-17T00:14:23.646791516Z" level=info msg="StopPodSandbox for \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\"" May 17 00:14:23.646940 containerd[2668]: time="2025-05-17T00:14:23.646925435Z" level=info msg="Ensure that sandbox 
b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4 in task-service has been cleanup successfully" May 17 00:14:23.649138 kubelet[4140]: I0517 00:14:23.649114 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:23.649248 containerd[2668]: time="2025-05-17T00:14:23.649199004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:14:23.649608 containerd[2668]: time="2025-05-17T00:14:23.649572478Z" level=info msg="StopPodSandbox for \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\"" May 17 00:14:23.649765 containerd[2668]: time="2025-05-17T00:14:23.649731316Z" level=info msg="Ensure that sandbox 12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f in task-service has been cleanup successfully" May 17 00:14:23.650244 containerd[2668]: time="2025-05-17T00:14:23.650213270Z" level=error msg="Failed to destroy network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.650791 containerd[2668]: time="2025-05-17T00:14:23.650761662Z" level=error msg="encountered an error cleaning up failed sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.650841 containerd[2668]: time="2025-05-17T00:14:23.650821221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2qg6j,Uid:ffe0c287-a727-4d33-b8b4-1067d384be58,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.650993 kubelet[4140]: E0517 00:14:23.650973 4140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.651029 kubelet[4140]: E0517 00:14:23.651011 4140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2qg6j" May 17 00:14:23.651061 kubelet[4140]: E0517 00:14:23.651029 4140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2qg6j" May 17 00:14:23.651087 kubelet[4140]: E0517 00:14:23.651060 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2qg6j_calico-system(ffe0c287-a727-4d33-b8b4-1067d384be58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2qg6j_calico-system(ffe0c287-a727-4d33-b8b4-1067d384be58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2qg6j" podUID="ffe0c287-a727-4d33-b8b4-1067d384be58" May 17 00:14:23.652023 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f-shm.mount: Deactivated successfully. May 17 00:14:23.664481 containerd[2668]: time="2025-05-17T00:14:23.664436836Z" level=error msg="StopPodSandbox for \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\" failed" error="failed to destroy network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.665033 kubelet[4140]: E0517 00:14:23.664649 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:23.665033 kubelet[4140]: E0517 00:14:23.664707 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4"} May 17 00:14:23.665033 kubelet[4140]: E0517 00:14:23.664762 4140 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46e0c7e1-4d50-450c-9360-f91ac6e40f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:23.665033 kubelet[4140]: E0517 00:14:23.664782 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46e0c7e1-4d50-450c-9360-f91ac6e40f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-28zzb" podUID="46e0c7e1-4d50-450c-9360-f91ac6e40f72" May 17 00:14:23.665217 containerd[2668]: time="2025-05-17T00:14:23.664941909Z" level=error msg="StopPodSandbox for \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\" failed" error="failed to destroy network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.665252 kubelet[4140]: E0517 00:14:23.665062 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:23.665252 kubelet[4140]: E0517 00:14:23.665100 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88"} May 17 00:14:23.665252 kubelet[4140]: E0517 00:14:23.665125 4140 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92b8742a-1089-44d9-aa7e-9b92fa6f958f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:23.665252 kubelet[4140]: E0517 00:14:23.665145 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92b8742a-1089-44d9-aa7e-9b92fa6f958f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57674c48cd-bjqp6" podUID="92b8742a-1089-44d9-aa7e-9b92fa6f958f" May 17 00:14:23.665701 containerd[2668]: time="2025-05-17T00:14:23.665670499Z" level=error msg="StopPodSandbox for \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\" failed" error="failed to destroy network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.665809 kubelet[4140]: E0517 00:14:23.665794 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:23.665835 kubelet[4140]: E0517 00:14:23.665812 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c"} May 17 00:14:23.665856 kubelet[4140]: E0517 00:14:23.665832 4140 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c98c37c4-9226-427e-bd6e-d36379e173df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:23.665891 kubelet[4140]: E0517 00:14:23.665848 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c98c37c4-9226-427e-bd6e-d36379e173df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-78fd6f9455-vxl45" podUID="c98c37c4-9226-427e-bd6e-d36379e173df" May 17 00:14:23.668400 containerd[2668]: time="2025-05-17T00:14:23.668368142Z" level=error msg="StopPodSandbox for \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\" failed" error="failed to destroy network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.668501 kubelet[4140]: E0517 00:14:23.668479 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:23.668530 kubelet[4140]: E0517 00:14:23.668507 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed"} May 17 00:14:23.668551 kubelet[4140]: E0517 00:14:23.668528 4140 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:23.668597 kubelet[4140]: E0517 00:14:23.668545 4140 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f7cc55c9-7jvwm" podUID="9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2" May 17 00:14:23.668908 containerd[2668]: time="2025-05-17T00:14:23.668880055Z" level=error msg="StopPodSandbox for \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\" failed" error="failed to destroy network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.669022 kubelet[4140]: E0517 00:14:23.669006 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:23.669048 kubelet[4140]: E0517 00:14:23.669024 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e"} May 17 00:14:23.669048 kubelet[4140]: E0517 00:14:23.669043 4140 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e476acd-d55f-474e-8792-7a4ffbe093e3\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:23.669100 kubelet[4140]: E0517 00:14:23.669058 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e476acd-d55f-474e-8792-7a4ffbe093e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f7cc55c9-spxxt" podUID="8e476acd-d55f-474e-8792-7a4ffbe093e3" May 17 00:14:23.669279 containerd[2668]: time="2025-05-17T00:14:23.669250850Z" level=error msg="StopPodSandbox for \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\" failed" error="failed to destroy network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:23.669380 kubelet[4140]: E0517 00:14:23.669356 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:23.669406 kubelet[4140]: E0517 00:14:23.669389 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4"} May 17 00:14:23.669430 kubelet[4140]: E0517 00:14:23.669412 4140 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e09e03c-e3f6-450c-916f-1c8c1d6def9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:23.669464 kubelet[4140]: E0517 00:14:23.669432 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e09e03c-e3f6-450c-916f-1c8c1d6def9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-st4g5" podUID="0e09e03c-e3f6-450c-916f-1c8c1d6def9e" May 17 00:14:23.671857 containerd[2668]: time="2025-05-17T00:14:23.671828735Z" level=error msg="StopPodSandbox for \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\" failed" error="failed to destroy network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 17 00:14:23.671949 kubelet[4140]: E0517 00:14:23.671931 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:23.671978 kubelet[4140]: E0517 00:14:23.671952 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f"} May 17 00:14:23.671978 kubelet[4140]: E0517 00:14:23.671970 4140 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c38caedc-750a-4223-9bb4-03b5231f627f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:23.672035 kubelet[4140]: E0517 00:14:23.671985 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c38caedc-750a-4223-9bb4-03b5231f627f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:14:24.651612 kubelet[4140]: 
I0517 00:14:24.651579 4140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:24.652099 containerd[2668]: time="2025-05-17T00:14:24.652043841Z" level=info msg="StopPodSandbox for \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\"" May 17 00:14:24.652263 containerd[2668]: time="2025-05-17T00:14:24.652235319Z" level=info msg="Ensure that sandbox 55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f in task-service has been cleanup successfully" May 17 00:14:24.672810 containerd[2668]: time="2025-05-17T00:14:24.672761177Z" level=error msg="StopPodSandbox for \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\" failed" error="failed to destroy network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:14:24.672956 kubelet[4140]: E0517 00:14:24.672924 4140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:24.673007 kubelet[4140]: E0517 00:14:24.672963 4140 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f"} May 17 00:14:24.673007 kubelet[4140]: E0517 00:14:24.672993 4140 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"ffe0c287-a727-4d33-b8b4-1067d384be58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:14:24.673080 kubelet[4140]: E0517 00:14:24.673012 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffe0c287-a727-4d33-b8b4-1067d384be58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2qg6j" podUID="ffe0c287-a727-4d33-b8b4-1067d384be58" May 17 00:14:25.417037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695395073.mount: Deactivated successfully. 
May 17 00:14:25.433619 containerd[2668]: time="2025-05-17T00:14:25.433574394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:25.433725 containerd[2668]: time="2025-05-17T00:14:25.433585714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 17 00:14:25.434227 containerd[2668]: time="2025-05-17T00:14:25.434209267Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:25.435878 containerd[2668]: time="2025-05-17T00:14:25.435855207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:25.436528 containerd[2668]: time="2025-05-17T00:14:25.436503159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 1.787269156s" May 17 00:14:25.436566 containerd[2668]: time="2025-05-17T00:14:25.436534239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 00:14:25.441766 containerd[2668]: time="2025-05-17T00:14:25.441740537Z" level=info msg="CreateContainer within sandbox \"e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:14:25.461960 containerd[2668]: time="2025-05-17T00:14:25.461921975Z" level=info 
msg="CreateContainer within sandbox \"e781ce460bfbbf8f4167e5afcbca15b64d8135535a8badcd8369f621bc670bfd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66de6280097566d5fc3fcb68c79bb7cecdd92c9bb972d667c1f6fdfd6a68ac5f\"" May 17 00:14:25.462341 containerd[2668]: time="2025-05-17T00:14:25.462317650Z" level=info msg="StartContainer for \"66de6280097566d5fc3fcb68c79bb7cecdd92c9bb972d667c1f6fdfd6a68ac5f\"" May 17 00:14:25.488705 systemd[1]: Started cri-containerd-66de6280097566d5fc3fcb68c79bb7cecdd92c9bb972d667c1f6fdfd6a68ac5f.scope - libcontainer container 66de6280097566d5fc3fcb68c79bb7cecdd92c9bb972d667c1f6fdfd6a68ac5f. May 17 00:14:25.508703 containerd[2668]: time="2025-05-17T00:14:25.508670214Z" level=info msg="StartContainer for \"66de6280097566d5fc3fcb68c79bb7cecdd92c9bb972d667c1f6fdfd6a68ac5f\" returns successfully" May 17 00:14:25.639084 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:14:25.639182 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 17 00:14:25.665485 kubelet[4140]: I0517 00:14:25.665436 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-59q2p" podStartSLOduration=1.3379277649999999 podStartE2EDuration="6.665421015s" podCreationTimestamp="2025-05-17 00:14:19 +0000 UTC" firstStartedPulling="2025-05-17 00:14:20.109574503 +0000 UTC m=+22.577771719" lastFinishedPulling="2025-05-17 00:14:25.437067753 +0000 UTC m=+27.905264969" observedRunningTime="2025-05-17 00:14:25.665092019 +0000 UTC m=+28.133289235" watchObservedRunningTime="2025-05-17 00:14:25.665421015 +0000 UTC m=+28.133618231" May 17 00:14:25.695229 containerd[2668]: time="2025-05-17T00:14:25.695151299Z" level=info msg="StopPodSandbox for \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\"" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.734 [INFO][6119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.734 [INFO][6119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" iface="eth0" netns="/var/run/netns/cni-033e084c-f36b-47a1-203e-c1bc3ada3a2f" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.735 [INFO][6119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" iface="eth0" netns="/var/run/netns/cni-033e084c-f36b-47a1-203e-c1bc3ada3a2f" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.735 [INFO][6119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" iface="eth0" netns="/var/run/netns/cni-033e084c-f36b-47a1-203e-c1bc3ada3a2f" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.735 [INFO][6119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.735 [INFO][6119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.769 [INFO][6153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.769 [INFO][6153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.769 [INFO][6153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.777 [WARNING][6153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.777 [INFO][6153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.778 [INFO][6153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:25.781382 containerd[2668]: 2025-05-17 00:14:25.779 [INFO][6119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:25.781749 containerd[2668]: time="2025-05-17T00:14:25.781532184Z" level=info msg="TearDown network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\" successfully" May 17 00:14:25.781749 containerd[2668]: time="2025-05-17T00:14:25.781563223Z" level=info msg="StopPodSandbox for \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\" returns successfully" May 17 00:14:25.866870 kubelet[4140]: I0517 00:14:25.866835 4140 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjqzx\" (UniqueName: \"kubernetes.io/projected/c98c37c4-9226-427e-bd6e-d36379e173df-kube-api-access-rjqzx\") pod \"c98c37c4-9226-427e-bd6e-d36379e173df\" (UID: \"c98c37c4-9226-427e-bd6e-d36379e173df\") " May 17 00:14:25.866932 kubelet[4140]: I0517 00:14:25.866882 4140 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-backend-key-pair\") pod \"c98c37c4-9226-427e-bd6e-d36379e173df\" (UID: \"c98c37c4-9226-427e-bd6e-d36379e173df\") " May 17 00:14:25.866932 kubelet[4140]: I0517 00:14:25.866901 4140 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-ca-bundle\") pod \"c98c37c4-9226-427e-bd6e-d36379e173df\" (UID: \"c98c37c4-9226-427e-bd6e-d36379e173df\") " May 17 00:14:25.867294 kubelet[4140]: I0517 00:14:25.867273 4140 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c98c37c4-9226-427e-bd6e-d36379e173df" (UID: "c98c37c4-9226-427e-bd6e-d36379e173df"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:14:25.869199 kubelet[4140]: I0517 00:14:25.869175 4140 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98c37c4-9226-427e-bd6e-d36379e173df-kube-api-access-rjqzx" (OuterVolumeSpecName: "kube-api-access-rjqzx") pod "c98c37c4-9226-427e-bd6e-d36379e173df" (UID: "c98c37c4-9226-427e-bd6e-d36379e173df"). InnerVolumeSpecName "kube-api-access-rjqzx". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:14:25.869541 kubelet[4140]: I0517 00:14:25.869516 4140 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c98c37c4-9226-427e-bd6e-d36379e173df" (UID: "c98c37c4-9226-427e-bd6e-d36379e173df"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:14:25.967889 kubelet[4140]: I0517 00:14:25.967826 4140 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjqzx\" (UniqueName: \"kubernetes.io/projected/c98c37c4-9226-427e-bd6e-d36379e173df-kube-api-access-rjqzx\") on node \"ci-4081.3.3-n-3bfd76e738\" DevicePath \"\"" May 17 00:14:25.967889 kubelet[4140]: I0517 00:14:25.967843 4140 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-3bfd76e738\" DevicePath \"\"" May 17 00:14:25.967889 kubelet[4140]: I0517 00:14:25.967854 4140 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c98c37c4-9226-427e-bd6e-d36379e173df-whisker-ca-bundle\") on node \"ci-4081.3.3-n-3bfd76e738\" DevicePath \"\"" May 17 00:14:26.418045 systemd[1]: run-netns-cni\x2d033e084c\x2df36b\x2d47a1\x2d203e\x2dc1bc3ada3a2f.mount: Deactivated successfully. May 17 00:14:26.418122 systemd[1]: var-lib-kubelet-pods-c98c37c4\x2d9226\x2d427e\x2dbd6e\x2dd36379e173df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjqzx.mount: Deactivated successfully. May 17 00:14:26.418180 systemd[1]: var-lib-kubelet-pods-c98c37c4\x2d9226\x2d427e\x2dbd6e\x2dd36379e173df-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:14:26.655841 kubelet[4140]: I0517 00:14:26.655817 4140 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:14:26.659361 systemd[1]: Removed slice kubepods-besteffort-podc98c37c4_9226_427e_bd6e_d36379e173df.slice - libcontainer container kubepods-besteffort-podc98c37c4_9226_427e_bd6e_d36379e173df.slice. 
May 17 00:14:26.691938 systemd[1]: Created slice kubepods-besteffort-podfe4f1f40_6407_495f_8d43_6209e12b2a8a.slice - libcontainer container kubepods-besteffort-podfe4f1f40_6407_495f_8d43_6209e12b2a8a.slice. May 17 00:14:26.772572 kubelet[4140]: I0517 00:14:26.772538 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe4f1f40-6407-495f-8d43-6209e12b2a8a-whisker-backend-key-pair\") pod \"whisker-7c95f6579-26ngv\" (UID: \"fe4f1f40-6407-495f-8d43-6209e12b2a8a\") " pod="calico-system/whisker-7c95f6579-26ngv" May 17 00:14:26.772907 kubelet[4140]: I0517 00:14:26.772607 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe4f1f40-6407-495f-8d43-6209e12b2a8a-whisker-ca-bundle\") pod \"whisker-7c95f6579-26ngv\" (UID: \"fe4f1f40-6407-495f-8d43-6209e12b2a8a\") " pod="calico-system/whisker-7c95f6579-26ngv" May 17 00:14:26.772907 kubelet[4140]: I0517 00:14:26.772657 4140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b62v2\" (UniqueName: \"kubernetes.io/projected/fe4f1f40-6407-495f-8d43-6209e12b2a8a-kube-api-access-b62v2\") pod \"whisker-7c95f6579-26ngv\" (UID: \"fe4f1f40-6407-495f-8d43-6209e12b2a8a\") " pod="calico-system/whisker-7c95f6579-26ngv" May 17 00:14:26.994372 containerd[2668]: time="2025-05-17T00:14:26.994252031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c95f6579-26ngv,Uid:fe4f1f40-6407-495f-8d43-6209e12b2a8a,Namespace:calico-system,Attempt:0,}" May 17 00:14:27.078159 systemd-networkd[2569]: cali91d82aaa6d2: Link UP May 17 00:14:27.078360 systemd-networkd[2569]: cali91d82aaa6d2: Gained carrier May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.015 [INFO][6331] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:14:27.086215 
containerd[2668]: 2025-05-17 00:14:27.027 [INFO][6331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0 whisker-7c95f6579- calico-system fe4f1f40-6407-495f-8d43-6209e12b2a8a 846 0 2025-05-17 00:14:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c95f6579 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-3bfd76e738 whisker-7c95f6579-26ngv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali91d82aaa6d2 [] [] }} ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.027 [INFO][6331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.046 [INFO][6358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" HandleID="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.046 [INFO][6358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" HandleID="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" 
Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000367e50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"whisker-7c95f6579-26ngv", "timestamp":"2025-05-17 00:14:27.046487196 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.047 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.047 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.047 [INFO][6358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.055 [INFO][6358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.058 [INFO][6358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.061 [INFO][6358] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.063 [INFO][6358] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.064 [INFO][6358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 
containerd[2668]: 2025-05-17 00:14:27.064 [INFO][6358] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.065 [INFO][6358] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45 May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.067 [INFO][6358] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.071 [INFO][6358] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.1/26] block=192.168.18.0/26 handle="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.071 [INFO][6358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.1/26] handle="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.071 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:14:27.086215 containerd[2668]: 2025-05-17 00:14:27.071 [INFO][6358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.1/26] IPv6=[] ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" HandleID="k8s-pod-network.1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" May 17 00:14:27.086674 containerd[2668]: 2025-05-17 00:14:27.072 [INFO][6331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0", GenerateName:"whisker-7c95f6579-", Namespace:"calico-system", SelfLink:"", UID:"fe4f1f40-6407-495f-8d43-6209e12b2a8a", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c95f6579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"whisker-7c95f6579-26ngv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali91d82aaa6d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:27.086674 containerd[2668]: 2025-05-17 00:14:27.073 [INFO][6331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.1/32] ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" May 17 00:14:27.086674 containerd[2668]: 2025-05-17 00:14:27.073 [INFO][6331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91d82aaa6d2 ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" May 17 00:14:27.086674 containerd[2668]: 2025-05-17 00:14:27.079 [INFO][6331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" May 17 00:14:27.086674 containerd[2668]: 2025-05-17 00:14:27.079 [INFO][6331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0", GenerateName:"whisker-7c95f6579-", Namespace:"calico-system", SelfLink:"", UID:"fe4f1f40-6407-495f-8d43-6209e12b2a8a", 
ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c95f6579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45", Pod:"whisker-7c95f6579-26ngv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali91d82aaa6d2", MAC:"9a:e0:63:0f:96:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:27.086674 containerd[2668]: 2025-05-17 00:14:27.084 [INFO][6331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45" Namespace="calico-system" Pod="whisker-7c95f6579-26ngv" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--7c95f6579--26ngv-eth0" May 17 00:14:27.098354 containerd[2668]: time="2025-05-17T00:14:27.098004893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:27.098390 containerd[2668]: time="2025-05-17T00:14:27.098374769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:27.098411 containerd[2668]: time="2025-05-17T00:14:27.098387129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:27.098507 containerd[2668]: time="2025-05-17T00:14:27.098490288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:27.119769 systemd[1]: Started cri-containerd-1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45.scope - libcontainer container 1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45. May 17 00:14:27.143754 containerd[2668]: time="2025-05-17T00:14:27.143721052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c95f6579-26ngv,Uid:fe4f1f40-6407-495f-8d43-6209e12b2a8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d8144cfc07b3cf4239e944f4c5cbcc01575ae38d69e5eba8bb70cb44e949d45\"" May 17 00:14:27.144785 containerd[2668]: time="2025-05-17T00:14:27.144765041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:14:27.168971 containerd[2668]: time="2025-05-17T00:14:27.168926226Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:27.169222 containerd[2668]: time="2025-05-17T00:14:27.169195943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 
Forbidden" May 17 00:14:27.169280 containerd[2668]: time="2025-05-17T00:14:27.169257343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:14:27.169413 kubelet[4140]: E0517 00:14:27.169364 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:14:27.169475 kubelet[4140]: E0517 00:14:27.169427 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:14:27.169577 kubelet[4140]: E0517 00:14:27.169544 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7cef1fccdb484787957099af255476,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:27.171186 containerd[2668]: 
time="2025-05-17T00:14:27.171164602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:14:27.195234 containerd[2668]: time="2025-05-17T00:14:27.195196749Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:27.195489 containerd[2668]: time="2025-05-17T00:14:27.195457107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:27.195542 containerd[2668]: time="2025-05-17T00:14:27.195523866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:14:27.195665 kubelet[4140]: E0517 00:14:27.195624 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:14:27.195711 kubelet[4140]: E0517 00:14:27.195674 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:14:27.195851 kubelet[4140]: E0517 00:14:27.195789 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:27.197680 kubelet[4140]: E0517 00:14:27.197645 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:14:27.602620 kubelet[4140]: I0517 00:14:27.602584 4140 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c98c37c4-9226-427e-bd6e-d36379e173df" path="/var/lib/kubelet/pods/c98c37c4-9226-427e-bd6e-d36379e173df/volumes" May 17 00:14:27.658293 kubelet[4140]: E0517 00:14:27.658259 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:14:27.703870 kubelet[4140]: I0517 00:14:27.703840 4140 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:14:28.029612 kernel: bpftool[6552]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:14:28.190675 systemd-networkd[2569]: vxlan.calico: Link UP May 17 00:14:28.190680 systemd-networkd[2569]: vxlan.calico: Gained carrier May 17 00:14:28.660293 kubelet[4140]: E0517 00:14:28.660258 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:14:28.743699 systemd-networkd[2569]: cali91d82aaa6d2: Gained IPv6LL May 17 00:14:29.895689 systemd-networkd[2569]: vxlan.calico: Gained IPv6LL May 17 00:14:34.977404 kubelet[4140]: I0517 00:14:34.977336 4140 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:14:36.601059 containerd[2668]: time="2025-05-17T00:14:36.601005565Z" level=info msg="StopPodSandbox for 
\"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\"" May 17 00:14:36.601500 containerd[2668]: time="2025-05-17T00:14:36.601030805Z" level=info msg="StopPodSandbox for \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\"" May 17 00:14:36.601500 containerd[2668]: time="2025-05-17T00:14:36.601034285Z" level=info msg="StopPodSandbox for \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\"" May 17 00:14:36.601500 containerd[2668]: time="2025-05-17T00:14:36.601031285Z" level=info msg="StopPodSandbox for \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\"" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" iface="eth0" netns="/var/run/netns/cni-a150d909-3338-b68a-b030-66f74dd4d352" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" iface="eth0" netns="/var/run/netns/cni-a150d909-3338-b68a-b030-66f74dd4d352" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" iface="eth0" netns="/var/run/netns/cni-a150d909-3338-b68a-b030-66f74dd4d352" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.653 [INFO][6999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.654 [INFO][6999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.654 [INFO][6999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.661 [WARNING][6999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.661 [INFO][6999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.662 [INFO][6999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:36.664675 containerd[2668]: 2025-05-17 00:14:36.663 [INFO][6915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:36.665030 containerd[2668]: time="2025-05-17T00:14:36.664838505Z" level=info msg="TearDown network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\" successfully" May 17 00:14:36.665030 containerd[2668]: time="2025-05-17T00:14:36.664862785Z" level=info msg="StopPodSandbox for \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\" returns successfully" May 17 00:14:36.665441 containerd[2668]: time="2025-05-17T00:14:36.665418142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-spxxt,Uid:8e476acd-d55f-474e-8792-7a4ffbe093e3,Namespace:calico-apiserver,Attempt:1,}" May 17 00:14:36.666788 systemd[1]: run-netns-cni\x2da150d909\x2d3338\x2db68a\x2db030\x2d66f74dd4d352.mount: Deactivated successfully. 
May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6913] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" iface="eth0" netns="/var/run/netns/cni-4e97132a-8596-857c-57f9-b86c64523d42" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6913] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" iface="eth0" netns="/var/run/netns/cni-4e97132a-8596-857c-57f9-b86c64523d42" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6913] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" iface="eth0" netns="/var/run/netns/cni-4e97132a-8596-857c-57f9-b86c64523d42" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.653 [INFO][6997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.654 
[INFO][6997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.662 [INFO][6997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.669 [WARNING][6997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.669 [INFO][6997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.670 [INFO][6997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:36.673660 containerd[2668]: 2025-05-17 00:14:36.672 [INFO][6913] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:36.673941 containerd[2668]: time="2025-05-17T00:14:36.673786383Z" level=info msg="TearDown network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\" successfully" May 17 00:14:36.673941 containerd[2668]: time="2025-05-17T00:14:36.673808343Z" level=info msg="StopPodSandbox for \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\" returns successfully" May 17 00:14:36.674227 containerd[2668]: time="2025-05-17T00:14:36.674204261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-7jvwm,Uid:9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2,Namespace:calico-apiserver,Attempt:1,}" May 17 00:14:36.675406 systemd[1]: run-netns-cni\x2d4e97132a\x2d8596\x2d857c\x2d57f9\x2db86c64523d42.mount: Deactivated successfully. May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6916] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" iface="eth0" netns="/var/run/netns/cni-087a251a-f41a-9b5b-aaf0-f1a83892a972" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" iface="eth0" netns="/var/run/netns/cni-087a251a-f41a-9b5b-aaf0-f1a83892a972" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" iface="eth0" netns="/var/run/netns/cni-087a251a-f41a-9b5b-aaf0-f1a83892a972" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.654 [INFO][7003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.654 [INFO][7003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.670 [INFO][7003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.677 [WARNING][7003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.677 [INFO][7003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.678 [INFO][7003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:36.682218 containerd[2668]: 2025-05-17 00:14:36.680 [INFO][6916] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:36.682621 containerd[2668]: time="2025-05-17T00:14:36.682341822Z" level=info msg="TearDown network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\" successfully" May 17 00:14:36.682621 containerd[2668]: time="2025-05-17T00:14:36.682363502Z" level=info msg="StopPodSandbox for \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\" returns successfully" May 17 00:14:36.682826 containerd[2668]: time="2025-05-17T00:14:36.682802900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28zzb,Uid:46e0c7e1-4d50-450c-9360-f91ac6e40f72,Namespace:kube-system,Attempt:1,}" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.636 [INFO][6914] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6914] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" iface="eth0" netns="/var/run/netns/cni-a3da4538-e4b7-e234-a3f2-608e003bc0c7" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6914] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" iface="eth0" netns="/var/run/netns/cni-a3da4538-e4b7-e234-a3f2-608e003bc0c7" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6914] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" iface="eth0" netns="/var/run/netns/cni-a3da4538-e4b7-e234-a3f2-608e003bc0c7" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6914] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.637 [INFO][6914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.654 [INFO][7002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.654 [INFO][7002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.678 [INFO][7002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.686 [WARNING][7002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.686 [INFO][7002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.687 [INFO][7002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:36.690180 containerd[2668]: 2025-05-17 00:14:36.688 [INFO][6914] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:36.690504 containerd[2668]: time="2025-05-17T00:14:36.690328745Z" level=info msg="TearDown network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\" successfully" May 17 00:14:36.690504 containerd[2668]: time="2025-05-17T00:14:36.690358945Z" level=info msg="StopPodSandbox for \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\" returns successfully" May 17 00:14:36.690823 containerd[2668]: time="2025-05-17T00:14:36.690800303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-st4g5,Uid:0e09e03c-e3f6-450c-916f-1c8c1d6def9e,Namespace:kube-system,Attempt:1,}" May 17 00:14:36.747989 systemd-networkd[2569]: calif03fb2c83d4: Link UP May 17 00:14:36.748206 systemd-networkd[2569]: calif03fb2c83d4: Gained carrier May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.698 [INFO][7082] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0 calico-apiserver-84f7cc55c9- calico-apiserver 8e476acd-d55f-474e-8792-7a4ffbe093e3 903 0 2025-05-17 00:14:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84f7cc55c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-3bfd76e738 calico-apiserver-84f7cc55c9-spxxt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif03fb2c83d4 [] [] }} ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.698 
[INFO][7082] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.719 [INFO][7162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" HandleID="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.719 [INFO][7162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" HandleID="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000366af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"calico-apiserver-84f7cc55c9-spxxt", "timestamp":"2025-05-17 00:14:36.719746086 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.719 [INFO][7162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.719 [INFO][7162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.719 [INFO][7162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.727 [INFO][7162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.731 [INFO][7162] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.734 [INFO][7162] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.736 [INFO][7162] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.737 [INFO][7162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.737 [INFO][7162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.738 [INFO][7162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.741 [INFO][7162] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.744 [INFO][7162] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.18.2/26] block=192.168.18.0/26 handle="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.744 [INFO][7162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.2/26] handle="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.744 [INFO][7162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:36.756870 containerd[2668]: 2025-05-17 00:14:36.745 [INFO][7162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.2/26] IPv6=[] ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" HandleID="k8s-pod-network.555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.757349 containerd[2668]: 2025-05-17 00:14:36.746 [INFO][7082] cni-plugin/k8s.go 418: Populated endpoint ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e476acd-d55f-474e-8792-7a4ffbe093e3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"calico-apiserver-84f7cc55c9-spxxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif03fb2c83d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:36.757349 containerd[2668]: 2025-05-17 00:14:36.746 [INFO][7082] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.2/32] ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.757349 containerd[2668]: 2025-05-17 00:14:36.746 [INFO][7082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif03fb2c83d4 ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.757349 containerd[2668]: 2025-05-17 00:14:36.748 [INFO][7082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" 
WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.757349 containerd[2668]: 2025-05-17 00:14:36.749 [INFO][7082] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e476acd-d55f-474e-8792-7a4ffbe093e3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d", Pod:"calico-apiserver-84f7cc55c9-spxxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif03fb2c83d4", MAC:"be:6d:5c:25:28:2a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:36.757349 containerd[2668]: 2025-05-17 00:14:36.755 [INFO][7082] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-spxxt" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:36.772196 containerd[2668]: time="2025-05-17T00:14:36.772108360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:36.772500 containerd[2668]: time="2025-05-17T00:14:36.772179279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:36.772542 containerd[2668]: time="2025-05-17T00:14:36.772502838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:36.772652 containerd[2668]: time="2025-05-17T00:14:36.772636117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:36.797785 systemd[1]: Started cri-containerd-555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d.scope - libcontainer container 555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d. 
May 17 00:14:36.821058 containerd[2668]: time="2025-05-17T00:14:36.821032209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-spxxt,Uid:8e476acd-d55f-474e-8792-7a4ffbe093e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d\"" May 17 00:14:36.824208 containerd[2668]: time="2025-05-17T00:14:36.824183595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:14:36.847249 systemd-networkd[2569]: cali829f9ce8f67: Link UP May 17 00:14:36.848681 systemd-networkd[2569]: cali829f9ce8f67: Gained carrier May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.704 [INFO][7098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0 calico-apiserver-84f7cc55c9- calico-apiserver 9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2 902 0 2025-05-17 00:14:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84f7cc55c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-3bfd76e738 calico-apiserver-84f7cc55c9-7jvwm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali829f9ce8f67 [] [] }} ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.704 [INFO][7098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" 
WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.728 [INFO][7193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" HandleID="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.728 [INFO][7193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" HandleID="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b6c90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"calico-apiserver-84f7cc55c9-7jvwm", "timestamp":"2025-05-17 00:14:36.728365046 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.728 [INFO][7193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.744 [INFO][7193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.745 [INFO][7193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.828 [INFO][7193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.831 [INFO][7193] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.834 [INFO][7193] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.835 [INFO][7193] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.837 [INFO][7193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.837 [INFO][7193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.838 [INFO][7193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.840 [INFO][7193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.844 [INFO][7193] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.18.3/26] block=192.168.18.0/26 handle="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.844 [INFO][7193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.3/26] handle="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.844 [INFO][7193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:36.855838 containerd[2668]: 2025-05-17 00:14:36.844 [INFO][7193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.3/26] IPv6=[] ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" HandleID="k8s-pod-network.96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.856302 containerd[2668]: 2025-05-17 00:14:36.846 [INFO][7098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"calico-apiserver-84f7cc55c9-7jvwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali829f9ce8f67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:36.856302 containerd[2668]: 2025-05-17 00:14:36.846 [INFO][7098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.3/32] ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.856302 containerd[2668]: 2025-05-17 00:14:36.846 [INFO][7098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali829f9ce8f67 ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.856302 containerd[2668]: 2025-05-17 00:14:36.848 [INFO][7098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" 
WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.856302 containerd[2668]: 2025-05-17 00:14:36.848 [INFO][7098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e", Pod:"calico-apiserver-84f7cc55c9-7jvwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali829f9ce8f67", MAC:"c2:74:29:98:f0:f6", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:36.856302 containerd[2668]: 2025-05-17 00:14:36.854 [INFO][7098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e" Namespace="calico-apiserver" Pod="calico-apiserver-84f7cc55c9-7jvwm" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:36.871326 containerd[2668]: time="2025-05-17T00:14:36.871263053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:36.871326 containerd[2668]: time="2025-05-17T00:14:36.871316333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:36.871390 containerd[2668]: time="2025-05-17T00:14:36.871328613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:36.871417 containerd[2668]: time="2025-05-17T00:14:36.871400092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:36.893714 systemd[1]: Started cri-containerd-96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e.scope - libcontainer container 96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e. 
May 17 00:14:36.916996 containerd[2668]: time="2025-05-17T00:14:36.916963518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f7cc55c9-7jvwm,Uid:9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e\"" May 17 00:14:36.961987 systemd-networkd[2569]: calic28700f392d: Link UP May 17 00:14:36.962348 systemd-networkd[2569]: calic28700f392d: Gained carrier May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.713 [INFO][7123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0 coredns-7c65d6cfc9- kube-system 46e0c7e1-4d50-450c-9360-f91ac6e40f72 904 0 2025-05-17 00:14:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-3bfd76e738 coredns-7c65d6cfc9-28zzb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic28700f392d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.713 [INFO][7123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.733 [INFO][7204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" HandleID="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.733 [INFO][7204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" HandleID="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006ac4b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"coredns-7c65d6cfc9-28zzb", "timestamp":"2025-05-17 00:14:36.733596301 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.733 [INFO][7204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.844 [INFO][7204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.844 [INFO][7204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.928 [INFO][7204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.941 [INFO][7204] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.948 [INFO][7204] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.950 [INFO][7204] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.951 [INFO][7204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.951 [INFO][7204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.952 [INFO][7204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456 May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.955 [INFO][7204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.959 [INFO][7204] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.18.4/26] block=192.168.18.0/26 handle="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.959 [INFO][7204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.4/26] handle="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.959 [INFO][7204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:36.971059 containerd[2668]: 2025-05-17 00:14:36.959 [INFO][7204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.4/26] IPv6=[] ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" HandleID="k8s-pod-network.fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.971525 containerd[2668]: 2025-05-17 00:14:36.960 [INFO][7123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"46e0c7e1-4d50-450c-9360-f91ac6e40f72", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"coredns-7c65d6cfc9-28zzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28700f392d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:36.971525 containerd[2668]: 2025-05-17 00:14:36.960 [INFO][7123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.4/32] ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.971525 containerd[2668]: 2025-05-17 00:14:36.960 [INFO][7123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic28700f392d ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.971525 containerd[2668]: 2025-05-17 00:14:36.962 [INFO][7123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.971525 containerd[2668]: 2025-05-17 00:14:36.962 [INFO][7123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"46e0c7e1-4d50-450c-9360-f91ac6e40f72", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456", Pod:"coredns-7c65d6cfc9-28zzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28700f392d", MAC:"c6:9f:a0:18:bb:55", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:36.971525 containerd[2668]: 2025-05-17 00:14:36.969 [INFO][7123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456" Namespace="kube-system" Pod="coredns-7c65d6cfc9-28zzb" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:36.989745 containerd[2668]: time="2025-05-17T00:14:36.989356417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:36.989745 containerd[2668]: time="2025-05-17T00:14:36.989737575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:36.989806 containerd[2668]: time="2025-05-17T00:14:36.989751735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:36.989899 containerd[2668]: time="2025-05-17T00:14:36.989880374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:37.013776 systemd[1]: Started cri-containerd-fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456.scope - libcontainer container fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456. 
May 17 00:14:37.036720 containerd[2668]: time="2025-05-17T00:14:37.036685694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-28zzb,Uid:46e0c7e1-4d50-450c-9360-f91ac6e40f72,Namespace:kube-system,Attempt:1,} returns sandbox id \"fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456\"" May 17 00:14:37.038446 containerd[2668]: time="2025-05-17T00:14:37.038424032Z" level=info msg="CreateContainer within sandbox \"fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:14:37.043458 containerd[2668]: time="2025-05-17T00:14:37.043428646Z" level=info msg="CreateContainer within sandbox \"fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"176936d86932bf683d99897be6e9693d1b677337a1e599ed1ff9ff478fcf08d8\"" May 17 00:14:37.043814 containerd[2668]: time="2025-05-17T00:14:37.043790010Z" level=info msg="StartContainer for \"176936d86932bf683d99897be6e9693d1b677337a1e599ed1ff9ff478fcf08d8\"" May 17 00:14:37.062883 systemd-networkd[2569]: calie9ee66ca7c3: Link UP May 17 00:14:37.063293 systemd-networkd[2569]: calie9ee66ca7c3: Gained carrier May 17 00:14:37.068721 systemd[1]: Started cri-containerd-176936d86932bf683d99897be6e9693d1b677337a1e599ed1ff9ff478fcf08d8.scope - libcontainer container 176936d86932bf683d99897be6e9693d1b677337a1e599ed1ff9ff478fcf08d8. 
May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:36.722 [INFO][7149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0 coredns-7c65d6cfc9- kube-system 0e09e03c-e3f6-450c-916f-1c8c1d6def9e 905 0 2025-05-17 00:14:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-3bfd76e738 coredns-7c65d6cfc9-st4g5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9ee66ca7c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:36.722 [INFO][7149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:36.743 [INFO][7243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" HandleID="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:36.743 [INFO][7243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" HandleID="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" 
Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400071c790), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"coredns-7c65d6cfc9-st4g5", "timestamp":"2025-05-17 00:14:36.743452455 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:36.743 [INFO][7243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:36.959 [INFO][7243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:36.959 [INFO][7243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.029 [INFO][7243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.039 [INFO][7243] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.049 [INFO][7243] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.050 [INFO][7243] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.052 [INFO][7243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 
containerd[2668]: 2025-05-17 00:14:37.052 [INFO][7243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.053 [INFO][7243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104 May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.055 [INFO][7243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.059 [INFO][7243] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.5/26] block=192.168.18.0/26 handle="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.059 [INFO][7243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.5/26] handle="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.059 [INFO][7243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:14:37.071386 containerd[2668]: 2025-05-17 00:14:37.060 [INFO][7243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.5/26] IPv6=[] ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" HandleID="k8s-pod-network.629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:37.071811 containerd[2668]: 2025-05-17 00:14:37.061 [INFO][7149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0e09e03c-e3f6-450c-916f-1c8c1d6def9e", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"coredns-7c65d6cfc9-st4g5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ee66ca7c3", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:37.071811 containerd[2668]: 2025-05-17 00:14:37.061 [INFO][7149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.5/32] ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:37.071811 containerd[2668]: 2025-05-17 00:14:37.061 [INFO][7149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9ee66ca7c3 ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:37.071811 containerd[2668]: 2025-05-17 00:14:37.063 [INFO][7149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:37.071811 containerd[2668]: 2025-05-17 00:14:37.063 [INFO][7149] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" 
WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0e09e03c-e3f6-450c-916f-1c8c1d6def9e", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104", Pod:"coredns-7c65d6cfc9-st4g5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ee66ca7c3", MAC:"32:c9:56:51:da:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:37.071811 containerd[2668]: 
2025-05-17 00:14:37.069 [INFO][7149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104" Namespace="kube-system" Pod="coredns-7c65d6cfc9-st4g5" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:37.083983 containerd[2668]: time="2025-05-17T00:14:37.083924158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:37.084014 containerd[2668]: time="2025-05-17T00:14:37.083977158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:37.084014 containerd[2668]: time="2025-05-17T00:14:37.083989839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:37.084081 containerd[2668]: time="2025-05-17T00:14:37.084066199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:37.085903 containerd[2668]: time="2025-05-17T00:14:37.085877099Z" level=info msg="StartContainer for \"176936d86932bf683d99897be6e9693d1b677337a1e599ed1ff9ff478fcf08d8\" returns successfully" May 17 00:14:37.115735 systemd[1]: Started cri-containerd-629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104.scope - libcontainer container 629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104. 
May 17 00:14:37.140190 containerd[2668]: time="2025-05-17T00:14:37.140161318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-st4g5,Uid:0e09e03c-e3f6-450c-916f-1c8c1d6def9e,Namespace:kube-system,Attempt:1,} returns sandbox id \"629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104\"" May 17 00:14:37.142060 containerd[2668]: time="2025-05-17T00:14:37.142037098Z" level=info msg="CreateContainer within sandbox \"629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:14:37.147760 containerd[2668]: time="2025-05-17T00:14:37.147728039Z" level=info msg="CreateContainer within sandbox \"629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b5acb3aed09887935de4d4bab92601cd2040c99b8f9813fdb8938159ee75887\"" May 17 00:14:37.148115 containerd[2668]: time="2025-05-17T00:14:37.148088243Z" level=info msg="StartContainer for \"7b5acb3aed09887935de4d4bab92601cd2040c99b8f9813fdb8938159ee75887\"" May 17 00:14:37.179801 systemd[1]: Started cri-containerd-7b5acb3aed09887935de4d4bab92601cd2040c99b8f9813fdb8938159ee75887.scope - libcontainer container 7b5acb3aed09887935de4d4bab92601cd2040c99b8f9813fdb8938159ee75887. 
May 17 00:14:37.197777 containerd[2668]: time="2025-05-17T00:14:37.197744092Z" level=info msg="StartContainer for \"7b5acb3aed09887935de4d4bab92601cd2040c99b8f9813fdb8938159ee75887\" returns successfully" May 17 00:14:37.601369 containerd[2668]: time="2025-05-17T00:14:37.601031595Z" level=info msg="StopPodSandbox for \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\"" May 17 00:14:37.601369 containerd[2668]: time="2025-05-17T00:14:37.601095956Z" level=info msg="StopPodSandbox for \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\"" May 17 00:14:37.673270 systemd[1]: run-netns-cni\x2d087a251a\x2df41a\x2d9b5b\x2daaf0\x2df1a83892a972.mount: Deactivated successfully. May 17 00:14:37.673351 systemd[1]: run-netns-cni\x2da3da4538\x2de4b7\x2de234\x2da3f2\x2d608e003bc0c7.mount: Deactivated successfully. May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.644 [INFO][7642] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.644 [INFO][7642] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" iface="eth0" netns="/var/run/netns/cni-0d218198-837a-a5bf-221b-ad38d2e24a37" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.645 [INFO][7642] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" iface="eth0" netns="/var/run/netns/cni-0d218198-837a-a5bf-221b-ad38d2e24a37" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.645 [INFO][7642] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" iface="eth0" netns="/var/run/netns/cni-0d218198-837a-a5bf-221b-ad38d2e24a37" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.645 [INFO][7642] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.645 [INFO][7642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.662 [INFO][7679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.662 [INFO][7679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.662 [INFO][7679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.670 [WARNING][7679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.670 [INFO][7679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.671 [INFO][7679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:37.678129 containerd[2668]: 2025-05-17 00:14:37.672 [INFO][7642] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:37.678406 containerd[2668]: time="2025-05-17T00:14:37.678305260Z" level=info msg="TearDown network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\" successfully" May 17 00:14:37.678406 containerd[2668]: time="2025-05-17T00:14:37.678331140Z" level=info msg="StopPodSandbox for \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\" returns successfully" May 17 00:14:37.678951 containerd[2668]: time="2025-05-17T00:14:37.678761385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2qg6j,Uid:ffe0c287-a727-4d33-b8b4-1067d384be58,Namespace:calico-system,Attempt:1,}" May 17 00:14:37.680155 systemd[1]: run-netns-cni\x2d0d218198\x2d837a\x2da5bf\x2d221b\x2dad38d2e24a37.mount: Deactivated successfully. 
May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.647 [INFO][7643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.647 [INFO][7643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" iface="eth0" netns="/var/run/netns/cni-c7d0d79e-e239-e137-7f62-9072b6299a83" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.647 [INFO][7643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" iface="eth0" netns="/var/run/netns/cni-c7d0d79e-e239-e137-7f62-9072b6299a83" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.647 [INFO][7643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" iface="eth0" netns="/var/run/netns/cni-c7d0d79e-e239-e137-7f62-9072b6299a83" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.647 [INFO][7643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.647 [INFO][7643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.664 [INFO][7685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 
00:14:37.664 [INFO][7685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.671 [INFO][7685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.679 [WARNING][7685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.679 [INFO][7685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.680 [INFO][7685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:37.683622 containerd[2668]: 2025-05-17 00:14:37.682 [INFO][7643] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:37.684105 containerd[2668]: time="2025-05-17T00:14:37.684015881Z" level=info msg="TearDown network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\" successfully" May 17 00:14:37.684105 containerd[2668]: time="2025-05-17T00:14:37.684041121Z" level=info msg="StopPodSandbox for \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\" returns successfully" May 17 00:14:37.684495 containerd[2668]: time="2025-05-17T00:14:37.684470206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57674c48cd-bjqp6,Uid:92b8742a-1089-44d9-aa7e-9b92fa6f958f,Namespace:calico-system,Attempt:1,}" May 17 00:14:37.685270 kubelet[4140]: I0517 00:14:37.685223 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-st4g5" podStartSLOduration=32.685203494 podStartE2EDuration="32.685203494s" podCreationTimestamp="2025-05-17 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:37.684971731 +0000 UTC m=+40.153168947" watchObservedRunningTime="2025-05-17 00:14:37.685203494 +0000 UTC m=+40.153400710" May 17 00:14:37.686644 systemd[1]: run-netns-cni\x2dc7d0d79e\x2de239\x2de137\x2d7f62\x2d9072b6299a83.mount: Deactivated successfully. 
May 17 00:14:37.700237 kubelet[4140]: I0517 00:14:37.700187 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-28zzb" podStartSLOduration=32.700169653 podStartE2EDuration="32.700169653s" podCreationTimestamp="2025-05-17 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:37.700136853 +0000 UTC m=+40.168334029" watchObservedRunningTime="2025-05-17 00:14:37.700169653 +0000 UTC m=+40.168366869" May 17 00:14:37.717864 containerd[2668]: time="2025-05-17T00:14:37.717826002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:37.718060 containerd[2668]: time="2025-05-17T00:14:37.717829722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 17 00:14:37.718532 containerd[2668]: time="2025-05-17T00:14:37.718513649Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:37.720354 containerd[2668]: time="2025-05-17T00:14:37.720335948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:37.721073 containerd[2668]: time="2025-05-17T00:14:37.721042316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 896.821122ms" May 17 
00:14:37.721105 containerd[2668]: time="2025-05-17T00:14:37.721078916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:14:37.721899 containerd[2668]: time="2025-05-17T00:14:37.721881925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:14:37.722793 containerd[2668]: time="2025-05-17T00:14:37.722769254Z" level=info msg="CreateContainer within sandbox \"555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:14:37.727606 containerd[2668]: time="2025-05-17T00:14:37.727570746Z" level=info msg="CreateContainer within sandbox \"555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"440ce0a8e164d3a64608b7efccb9f768866d152e6eec974d07aa147613506025\"" May 17 00:14:37.727950 containerd[2668]: time="2025-05-17T00:14:37.727928069Z" level=info msg="StartContainer for \"440ce0a8e164d3a64608b7efccb9f768866d152e6eec974d07aa147613506025\"" May 17 00:14:37.762019 systemd-networkd[2569]: cali102b27fda64: Link UP May 17 00:14:37.762237 systemd-networkd[2569]: cali102b27fda64: Gained carrier May 17 00:14:37.764754 systemd[1]: Started cri-containerd-440ce0a8e164d3a64608b7efccb9f768866d152e6eec974d07aa147613506025.scope - libcontainer container 440ce0a8e164d3a64608b7efccb9f768866d152e6eec974d07aa147613506025. 
May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.713 [INFO][7720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0 csi-node-driver- calico-system ffe0c287-a727-4d33-b8b4-1067d384be58 932 0 2025-05-17 00:14:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-3bfd76e738 csi-node-driver-2qg6j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali102b27fda64 [] [] }} ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.713 [INFO][7720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.733 [INFO][7778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" HandleID="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.733 [INFO][7778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" 
HandleID="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cfc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"csi-node-driver-2qg6j", "timestamp":"2025-05-17 00:14:37.73364045 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.733 [INFO][7778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.733 [INFO][7778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.733 [INFO][7778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.741 [INFO][7778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.745 [INFO][7778] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.748 [INFO][7778] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.749 [INFO][7778] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.751 [INFO][7778] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.751 [INFO][7778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.752 [INFO][7778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51 May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.754 [INFO][7778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.758 [INFO][7778] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.6/26] block=192.168.18.0/26 handle="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.759 [INFO][7778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.6/26] handle="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.759 [INFO][7778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:14:37.770474 containerd[2668]: 2025-05-17 00:14:37.759 [INFO][7778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.6/26] IPv6=[] ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" HandleID="k8s-pod-network.7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.770904 containerd[2668]: 2025-05-17 00:14:37.760 [INFO][7720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffe0c287-a727-4d33-b8b4-1067d384be58", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"csi-node-driver-2qg6j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali102b27fda64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:37.770904 containerd[2668]: 2025-05-17 00:14:37.760 [INFO][7720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.6/32] ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.770904 containerd[2668]: 2025-05-17 00:14:37.760 [INFO][7720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali102b27fda64 ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.770904 containerd[2668]: 2025-05-17 00:14:37.762 [INFO][7720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.770904 containerd[2668]: 2025-05-17 00:14:37.763 [INFO][7720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"ffe0c287-a727-4d33-b8b4-1067d384be58", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51", Pod:"csi-node-driver-2qg6j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali102b27fda64", MAC:"b6:1d:5f:09:c2:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:37.770904 containerd[2668]: 2025-05-17 00:14:37.769 [INFO][7720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51" Namespace="calico-system" Pod="csi-node-driver-2qg6j" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:37.782777 containerd[2668]: time="2025-05-17T00:14:37.782698094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:37.782777 containerd[2668]: time="2025-05-17T00:14:37.782762415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:37.782777 containerd[2668]: time="2025-05-17T00:14:37.782774215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:37.782895 containerd[2668]: time="2025-05-17T00:14:37.782854855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:37.787309 containerd[2668]: time="2025-05-17T00:14:37.787278103Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:37.787403 containerd[2668]: time="2025-05-17T00:14:37.787299983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:14:37.789839 containerd[2668]: time="2025-05-17T00:14:37.789807970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 67.897445ms" May 17 00:14:37.789868 containerd[2668]: time="2025-05-17T00:14:37.789847610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:14:37.789908 containerd[2668]: time="2025-05-17T00:14:37.789886651Z" level=info msg="StartContainer for \"440ce0a8e164d3a64608b7efccb9f768866d152e6eec974d07aa147613506025\" returns 
successfully" May 17 00:14:37.791277 containerd[2668]: time="2025-05-17T00:14:37.791254785Z" level=info msg="CreateContainer within sandbox \"96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:14:37.796038 containerd[2668]: time="2025-05-17T00:14:37.796008556Z" level=info msg="CreateContainer within sandbox \"96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e376922c00d8f945893b6901df57a52c6aa9d9383993bd37b64e0d0f0de2562\"" May 17 00:14:37.796356 containerd[2668]: time="2025-05-17T00:14:37.796335679Z" level=info msg="StartContainer for \"1e376922c00d8f945893b6901df57a52c6aa9d9383993bd37b64e0d0f0de2562\"" May 17 00:14:37.808739 systemd[1]: Started cri-containerd-7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51.scope - libcontainer container 7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51. May 17 00:14:37.811419 systemd[1]: Started cri-containerd-1e376922c00d8f945893b6901df57a52c6aa9d9383993bd37b64e0d0f0de2562.scope - libcontainer container 1e376922c00d8f945893b6901df57a52c6aa9d9383993bd37b64e0d0f0de2562. 
May 17 00:14:37.825354 containerd[2668]: time="2025-05-17T00:14:37.825326949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2qg6j,Uid:ffe0c287-a727-4d33-b8b4-1067d384be58,Namespace:calico-system,Attempt:1,} returns sandbox id \"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51\"" May 17 00:14:37.826347 containerd[2668]: time="2025-05-17T00:14:37.826323879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:14:37.838773 containerd[2668]: time="2025-05-17T00:14:37.838746732Z" level=info msg="StartContainer for \"1e376922c00d8f945893b6901df57a52c6aa9d9383993bd37b64e0d0f0de2562\" returns successfully" May 17 00:14:37.863211 systemd-networkd[2569]: calib4fa7c42f9c: Link UP May 17 00:14:37.863676 systemd-networkd[2569]: calib4fa7c42f9c: Gained carrier May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.726 [INFO][7747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0 calico-kube-controllers-57674c48cd- calico-system 92b8742a-1089-44d9-aa7e-9b92fa6f958f 933 0 2025-05-17 00:14:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57674c48cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-3bfd76e738 calico-kube-controllers-57674c48cd-bjqp6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib4fa7c42f9c [] [] }} ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.726 [INFO][7747] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.746 [INFO][7797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" HandleID="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.746 [INFO][7797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" HandleID="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40007a1390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"calico-kube-controllers-57674c48cd-bjqp6", "timestamp":"2025-05-17 00:14:37.746782231 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.746 [INFO][7797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.759 [INFO][7797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.759 [INFO][7797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.842 [INFO][7797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.846 [INFO][7797] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.849 [INFO][7797] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.850 [INFO][7797] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.852 [INFO][7797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.852 [INFO][7797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.853 [INFO][7797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.856 [INFO][7797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.860 [INFO][7797] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.18.7/26] block=192.168.18.0/26 handle="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.860 [INFO][7797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.7/26] handle="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.860 [INFO][7797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:37.872591 containerd[2668]: 2025-05-17 00:14:37.860 [INFO][7797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.7/26] IPv6=[] ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" HandleID="k8s-pod-network.a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.872984 containerd[2668]: 2025-05-17 00:14:37.861 [INFO][7747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0", GenerateName:"calico-kube-controllers-57674c48cd-", Namespace:"calico-system", SelfLink:"", UID:"92b8742a-1089-44d9-aa7e-9b92fa6f958f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"57674c48cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"calico-kube-controllers-57674c48cd-bjqp6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4fa7c42f9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:37.872984 containerd[2668]: 2025-05-17 00:14:37.861 [INFO][7747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.7/32] ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.872984 containerd[2668]: 2025-05-17 00:14:37.861 [INFO][7747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4fa7c42f9c ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.872984 containerd[2668]: 2025-05-17 00:14:37.863 [INFO][7747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" 
Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.872984 containerd[2668]: 2025-05-17 00:14:37.865 [INFO][7747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0", GenerateName:"calico-kube-controllers-57674c48cd-", Namespace:"calico-system", SelfLink:"", UID:"92b8742a-1089-44d9-aa7e-9b92fa6f958f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57674c48cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf", Pod:"calico-kube-controllers-57674c48cd-bjqp6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4fa7c42f9c", MAC:"02:38:4c:97:bd:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:37.872984 containerd[2668]: 2025-05-17 00:14:37.871 [INFO][7747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf" Namespace="calico-system" Pod="calico-kube-controllers-57674c48cd-bjqp6" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:37.884800 containerd[2668]: time="2025-05-17T00:14:37.884707382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:37.884834 containerd[2668]: time="2025-05-17T00:14:37.884817223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:37.884855 containerd[2668]: time="2025-05-17T00:14:37.884830824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:37.884931 containerd[2668]: time="2025-05-17T00:14:37.884915704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:37.910821 systemd[1]: Started cri-containerd-a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf.scope - libcontainer container a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf. 
May 17 00:14:37.934546 containerd[2668]: time="2025-05-17T00:14:37.934517754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57674c48cd-bjqp6,Uid:92b8742a-1089-44d9-aa7e-9b92fa6f958f,Namespace:calico-system,Attempt:1,} returns sandbox id \"a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf\"" May 17 00:14:38.127129 containerd[2668]: time="2025-05-17T00:14:38.127052331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:38.127183 containerd[2668]: time="2025-05-17T00:14:38.127129372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 17 00:14:38.127860 containerd[2668]: time="2025-05-17T00:14:38.127839259Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:38.129687 containerd[2668]: time="2025-05-17T00:14:38.129664638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:38.130394 containerd[2668]: time="2025-05-17T00:14:38.130368245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 304.013605ms" May 17 00:14:38.130455 containerd[2668]: time="2025-05-17T00:14:38.130399566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 17 00:14:38.131266 
containerd[2668]: time="2025-05-17T00:14:38.131246294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:14:38.132070 containerd[2668]: time="2025-05-17T00:14:38.132052463Z" level=info msg="CreateContainer within sandbox \"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:14:38.138386 containerd[2668]: time="2025-05-17T00:14:38.138353128Z" level=info msg="CreateContainer within sandbox \"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"23a4cd5a8caa3d538f130a2a95acfd35092ba4088862eca105e5e5c55d5b335a\"" May 17 00:14:38.138692 containerd[2668]: time="2025-05-17T00:14:38.138666451Z" level=info msg="StartContainer for \"23a4cd5a8caa3d538f130a2a95acfd35092ba4088862eca105e5e5c55d5b335a\"" May 17 00:14:38.166777 systemd[1]: Started cri-containerd-23a4cd5a8caa3d538f130a2a95acfd35092ba4088862eca105e5e5c55d5b335a.scope - libcontainer container 23a4cd5a8caa3d538f130a2a95acfd35092ba4088862eca105e5e5c55d5b335a. 
May 17 00:14:38.185982 containerd[2668]: time="2025-05-17T00:14:38.185950182Z" level=info msg="StartContainer for \"23a4cd5a8caa3d538f130a2a95acfd35092ba4088862eca105e5e5c55d5b335a\" returns successfully" May 17 00:14:38.215677 systemd-networkd[2569]: calic28700f392d: Gained IPv6LL May 17 00:14:38.343683 systemd-networkd[2569]: calie9ee66ca7c3: Gained IPv6LL May 17 00:14:38.471717 systemd-networkd[2569]: calif03fb2c83d4: Gained IPv6LL May 17 00:14:38.601116 containerd[2668]: time="2025-05-17T00:14:38.601074848Z" level=info msg="StopPodSandbox for \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\"" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.636 [INFO][8113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.637 [INFO][8113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" iface="eth0" netns="/var/run/netns/cni-a63658ec-9d6b-99bf-cdfb-a21e0abda04f" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.637 [INFO][8113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" iface="eth0" netns="/var/run/netns/cni-a63658ec-9d6b-99bf-cdfb-a21e0abda04f" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.637 [INFO][8113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" iface="eth0" netns="/var/run/netns/cni-a63658ec-9d6b-99bf-cdfb-a21e0abda04f" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.637 [INFO][8113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.637 [INFO][8113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.657 [INFO][8134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.657 [INFO][8134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.657 [INFO][8134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.664 [WARNING][8134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.664 [INFO][8134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.665 [INFO][8134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:38.678604 containerd[2668]: 2025-05-17 00:14:38.666 [INFO][8113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:38.678604 containerd[2668]: time="2025-05-17T00:14:38.675078376Z" level=info msg="TearDown network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\" successfully" May 17 00:14:38.678604 containerd[2668]: time="2025-05-17T00:14:38.675109256Z" level=info msg="StopPodSandbox for \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\" returns successfully" May 17 00:14:38.678604 containerd[2668]: time="2025-05-17T00:14:38.675672542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-q5w5n,Uid:c38caedc-750a-4223-9bb4-03b5231f627f,Namespace:calico-system,Attempt:1,}" May 17 00:14:38.677021 systemd[1]: run-netns-cni\x2da63658ec\x2d9d6b\x2d99bf\x2dcdfb\x2da21e0abda04f.mount: Deactivated successfully. 
May 17 00:14:38.711331 kubelet[4140]: I0517 00:14:38.711277 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84f7cc55c9-7jvwm" podStartSLOduration=21.83881337 podStartE2EDuration="22.711261191s" podCreationTimestamp="2025-05-17 00:14:16 +0000 UTC" firstStartedPulling="2025-05-17 00:14:36.917840714 +0000 UTC m=+39.386037890" lastFinishedPulling="2025-05-17 00:14:37.790288495 +0000 UTC m=+40.258485711" observedRunningTime="2025-05-17 00:14:38.71118167 +0000 UTC m=+41.179378846" watchObservedRunningTime="2025-05-17 00:14:38.711261191 +0000 UTC m=+41.179458407" May 17 00:14:38.718153 kubelet[4140]: I0517 00:14:38.718093 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84f7cc55c9-spxxt" podStartSLOduration=21.819756977 podStartE2EDuration="22.718077022s" podCreationTimestamp="2025-05-17 00:14:16 +0000 UTC" firstStartedPulling="2025-05-17 00:14:36.823405518 +0000 UTC m=+39.291602734" lastFinishedPulling="2025-05-17 00:14:37.721725563 +0000 UTC m=+40.189922779" observedRunningTime="2025-05-17 00:14:38.717990621 +0000 UTC m=+41.186187837" watchObservedRunningTime="2025-05-17 00:14:38.718077022 +0000 UTC m=+41.186274238" May 17 00:14:38.757995 systemd-networkd[2569]: cali352b1a6e2c3: Link UP May 17 00:14:38.758617 systemd-networkd[2569]: cali352b1a6e2c3: Gained carrier May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.708 [INFO][8162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0 goldmane-8f77d7b6c- calico-system c38caedc-750a-4223-9bb4-03b5231f627f 971 0 2025-05-17 00:14:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 
ci-4081.3.3-n-3bfd76e738 goldmane-8f77d7b6c-q5w5n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali352b1a6e2c3 [] [] }} ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.708 [INFO][8162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.729 [INFO][8185] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" HandleID="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.729 [INFO][8185] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" HandleID="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e6420), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-3bfd76e738", "pod":"goldmane-8f77d7b6c-q5w5n", "timestamp":"2025-05-17 00:14:38.729109496 +0000 UTC"}, Hostname:"ci-4081.3.3-n-3bfd76e738", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:14:38.766272 
containerd[2668]: 2025-05-17 00:14:38.729 [INFO][8185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.729 [INFO][8185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.729 [INFO][8185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-3bfd76e738' May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.737 [INFO][8185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.740 [INFO][8185] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.743 [INFO][8185] ipam/ipam.go 511: Trying affinity for 192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.744 [INFO][8185] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.746 [INFO][8185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.746 [INFO][8185] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.747 [INFO][8185] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606 May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.750 [INFO][8185] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.18.0/26 handle="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.754 [INFO][8185] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.8/26] block=192.168.18.0/26 handle="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.754 [INFO][8185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.8/26] handle="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" host="ci-4081.3.3-n-3bfd76e738" May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.754 [INFO][8185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:38.766272 containerd[2668]: 2025-05-17 00:14:38.754 [INFO][8185] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.8/26] IPv6=[] ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" HandleID="k8s-pod-network.a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.766719 containerd[2668]: 2025-05-17 00:14:38.755 [INFO][8162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c38caedc-750a-4223-9bb4-03b5231f627f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 
0, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"", Pod:"goldmane-8f77d7b6c-q5w5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali352b1a6e2c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:38.766719 containerd[2668]: 2025-05-17 00:14:38.755 [INFO][8162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.8/32] ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.766719 containerd[2668]: 2025-05-17 00:14:38.755 [INFO][8162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali352b1a6e2c3 ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.766719 containerd[2668]: 2025-05-17 00:14:38.758 [INFO][8162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" 
Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.766719 containerd[2668]: 2025-05-17 00:14:38.758 [INFO][8162] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c38caedc-750a-4223-9bb4-03b5231f627f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606", Pod:"goldmane-8f77d7b6c-q5w5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali352b1a6e2c3", MAC:"e2:b4:4b:42:16:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
May 17 00:14:38.766719 containerd[2668]: 2025-05-17 00:14:38.764 [INFO][8162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606" Namespace="calico-system" Pod="goldmane-8f77d7b6c-q5w5n" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:38.791341 containerd[2668]: time="2025-05-17T00:14:38.791280221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:14:38.791341 containerd[2668]: time="2025-05-17T00:14:38.791329902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:14:38.791423 containerd[2668]: time="2025-05-17T00:14:38.791341422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:38.791490 containerd[2668]: time="2025-05-17T00:14:38.791461783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:14:38.817819 systemd[1]: Started cri-containerd-a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606.scope - libcontainer container a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606. 
May 17 00:14:38.824733 containerd[2668]: time="2025-05-17T00:14:38.824683568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:38.824847 containerd[2668]: time="2025-05-17T00:14:38.824762969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 17 00:14:38.825506 containerd[2668]: time="2025-05-17T00:14:38.825482136Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:38.827290 containerd[2668]: time="2025-05-17T00:14:38.827232634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:38.827980 containerd[2668]: time="2025-05-17T00:14:38.827953682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 696.675467ms" May 17 00:14:38.828029 containerd[2668]: time="2025-05-17T00:14:38.827984922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 00:14:38.828750 containerd[2668]: time="2025-05-17T00:14:38.828727570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:14:38.833475 containerd[2668]: time="2025-05-17T00:14:38.833447899Z" level=info msg="CreateContainer within 
sandbox \"a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:14:38.838398 containerd[2668]: time="2025-05-17T00:14:38.838361670Z" level=info msg="CreateContainer within sandbox \"a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b750d9915af89d51cc9339078dc812af7fb97ab98c37fb0b9bc7b7ab0db76fc7\"" May 17 00:14:38.838746 containerd[2668]: time="2025-05-17T00:14:38.838718553Z" level=info msg="StartContainer for \"b750d9915af89d51cc9339078dc812af7fb97ab98c37fb0b9bc7b7ab0db76fc7\"" May 17 00:14:38.846844 containerd[2668]: time="2025-05-17T00:14:38.846809477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-q5w5n,Uid:c38caedc-750a-4223-9bb4-03b5231f627f,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606\"" May 17 00:14:38.856682 systemd-networkd[2569]: cali829f9ce8f67: Gained IPv6LL May 17 00:14:38.873777 systemd[1]: Started cri-containerd-b750d9915af89d51cc9339078dc812af7fb97ab98c37fb0b9bc7b7ab0db76fc7.scope - libcontainer container b750d9915af89d51cc9339078dc812af7fb97ab98c37fb0b9bc7b7ab0db76fc7. 
May 17 00:14:38.900647 containerd[2668]: time="2025-05-17T00:14:38.900606275Z" level=info msg="StartContainer for \"b750d9915af89d51cc9339078dc812af7fb97ab98c37fb0b9bc7b7ab0db76fc7\" returns successfully" May 17 00:14:39.168986 containerd[2668]: time="2025-05-17T00:14:39.168946091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:39.169096 containerd[2668]: time="2025-05-17T00:14:39.169013252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 17 00:14:39.169737 containerd[2668]: time="2025-05-17T00:14:39.169717139Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:39.171653 containerd[2668]: time="2025-05-17T00:14:39.171622558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:14:39.172361 containerd[2668]: time="2025-05-17T00:14:39.172335125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 343.552955ms" May 17 00:14:39.172385 containerd[2668]: time="2025-05-17T00:14:39.172368486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 17 00:14:39.173128 containerd[2668]: 
time="2025-05-17T00:14:39.173108373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:14:39.173990 containerd[2668]: time="2025-05-17T00:14:39.173966502Z" level=info msg="CreateContainer within sandbox \"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:14:39.179783 containerd[2668]: time="2025-05-17T00:14:39.179745360Z" level=info msg="CreateContainer within sandbox \"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"261f6ace05df3f3ebfcec345b6354173e3146a443917fb409396b7cadc575808\"" May 17 00:14:39.180110 containerd[2668]: time="2025-05-17T00:14:39.180083243Z" level=info msg="StartContainer for \"261f6ace05df3f3ebfcec345b6354173e3146a443917fb409396b7cadc575808\"" May 17 00:14:39.197472 containerd[2668]: time="2025-05-17T00:14:39.197425618Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:39.197730 containerd[2668]: time="2025-05-17T00:14:39.197662141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:39.197730 containerd[2668]: time="2025-05-17T00:14:39.197726581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:14:39.198132 kubelet[4140]: E0517 
00:14:39.197826 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:14:39.198132 kubelet[4140]: E0517 00:14:39.197871 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:14:39.198132 kubelet[4140]: E0517 00:14:39.197980 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jhx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:39.199817 kubelet[4140]: E0517 00:14:39.199787 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:14:39.216719 systemd[1]: Started 
cri-containerd-261f6ace05df3f3ebfcec345b6354173e3146a443917fb409396b7cadc575808.scope - libcontainer container 261f6ace05df3f3ebfcec345b6354173e3146a443917fb409396b7cadc575808. May 17 00:14:39.235941 containerd[2668]: time="2025-05-17T00:14:39.235906606Z" level=info msg="StartContainer for \"261f6ace05df3f3ebfcec345b6354173e3146a443917fb409396b7cadc575808\" returns successfully" May 17 00:14:39.559710 systemd-networkd[2569]: cali102b27fda64: Gained IPv6LL May 17 00:14:39.623694 systemd-networkd[2569]: calib4fa7c42f9c: Gained IPv6LL May 17 00:14:39.648124 kubelet[4140]: I0517 00:14:39.648011 4140 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:14:39.648124 kubelet[4140]: I0517 00:14:39.648049 4140 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:14:39.709577 kubelet[4140]: E0517 00:14:39.709541 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:14:39.712813 kubelet[4140]: I0517 00:14:39.712765 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57674c48cd-bjqp6" podStartSLOduration=18.81948949 podStartE2EDuration="19.712751696s" podCreationTimestamp="2025-05-17 00:14:20 +0000 UTC" firstStartedPulling="2025-05-17 00:14:37.935333562 +0000 UTC m=+40.403530778" lastFinishedPulling="2025-05-17 00:14:38.828595808 +0000 UTC m=+41.296792984" observedRunningTime="2025-05-17 00:14:39.712536974 +0000 UTC m=+42.180734190" watchObservedRunningTime="2025-05-17 00:14:39.712751696 +0000 UTC m=+42.180948912" May 17 00:14:39.720101 
kubelet[4140]: I0517 00:14:39.720059 4140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2qg6j" podStartSLOduration=18.373202635 podStartE2EDuration="19.72004465s" podCreationTimestamp="2025-05-17 00:14:20 +0000 UTC" firstStartedPulling="2025-05-17 00:14:37.826131597 +0000 UTC m=+40.294328813" lastFinishedPulling="2025-05-17 00:14:39.172973612 +0000 UTC m=+41.641170828" observedRunningTime="2025-05-17 00:14:39.719919488 +0000 UTC m=+42.188116704" watchObservedRunningTime="2025-05-17 00:14:39.72004465 +0000 UTC m=+42.188241866" May 17 00:14:40.601931 containerd[2668]: time="2025-05-17T00:14:40.601886737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:14:40.627319 containerd[2668]: time="2025-05-17T00:14:40.627264506Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:40.627548 containerd[2668]: time="2025-05-17T00:14:40.627517069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:40.627615 containerd[2668]: time="2025-05-17T00:14:40.627583989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:14:40.627688 kubelet[4140]: E0517 00:14:40.627656 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:14:40.627776 kubelet[4140]: E0517 00:14:40.627695 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:14:40.627811 kubelet[4140]: E0517 00:14:40.627780 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7cef1fccdb484787957099af255476,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*t
rue,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:40.629371 containerd[2668]: time="2025-05-17T00:14:40.629351127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:14:40.652130 containerd[2668]: time="2025-05-17T00:14:40.652091110Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:40.652336 containerd[2668]: time="2025-05-17T00:14:40.652311032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:40.652397 containerd[2668]: 
time="2025-05-17T00:14:40.652346952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:14:40.652482 kubelet[4140]: E0517 00:14:40.652445 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:14:40.652529 kubelet[4140]: E0517 00:14:40.652492 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:14:40.652652 kubelet[4140]: E0517 00:14:40.652620 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:40.653789 kubelet[4140]: E0517 00:14:40.653756 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:14:40.711095 kubelet[4140]: E0517 00:14:40.711067 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:14:40.775717 systemd-networkd[2569]: cali352b1a6e2c3: Gained IPv6LL May 17 00:14:51.601523 kubelet[4140]: E0517 00:14:51.601435 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:14:54.601351 containerd[2668]: time="2025-05-17T00:14:54.601302129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:14:54.626546 containerd[2668]: time="2025-05-17T00:14:54.626492699Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:14:54.637226 containerd[2668]: time="2025-05-17T00:14:54.637188970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:14:54.637280 containerd[2668]: time="2025-05-17T00:14:54.637244611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:14:54.637447 kubelet[4140]: E0517 00:14:54.637385 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:14:54.637740 kubelet[4140]: E0517 00:14:54.637454 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:14:54.637740 kubelet[4140]: E0517 00:14:54.637646 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jhx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:14:54.638818 kubelet[4140]: E0517 00:14:54.638791 4140 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:14:57.593846 containerd[2668]: time="2025-05-17T00:14:57.593804251Z" level=info msg="StopPodSandbox for \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\"" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.624 [WARNING][8453] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffe0c287-a727-4d33-b8b4-1067d384be58", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51", Pod:"csi-node-driver-2qg6j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali102b27fda64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.624 [INFO][8453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.624 [INFO][8453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" iface="eth0" netns="" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.624 [INFO][8453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.624 [INFO][8453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.641 [INFO][8473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.641 [INFO][8473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.641 [INFO][8473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.649 [WARNING][8473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.649 [INFO][8473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.650 [INFO][8473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.653148 containerd[2668]: 2025-05-17 00:14:57.651 [INFO][8453] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.653483 containerd[2668]: time="2025-05-17T00:14:57.653181900Z" level=info msg="TearDown network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\" successfully" May 17 00:14:57.653483 containerd[2668]: time="2025-05-17T00:14:57.653208620Z" level=info msg="StopPodSandbox for \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\" returns successfully" May 17 00:14:57.653559 containerd[2668]: time="2025-05-17T00:14:57.653533382Z" level=info msg="RemovePodSandbox for \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\"" May 17 00:14:57.653600 containerd[2668]: time="2025-05-17T00:14:57.653567863Z" level=info msg="Forcibly stopping sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\"" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.683 [WARNING][8498] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffe0c287-a727-4d33-b8b4-1067d384be58", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"7af18049be2f6f7fcb3cbca7d6143be607d34669885288bae4dce03db7168b51", Pod:"csi-node-driver-2qg6j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali102b27fda64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.684 [INFO][8498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.684 [INFO][8498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" iface="eth0" netns="" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.684 [INFO][8498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.684 [INFO][8498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.701 [INFO][8518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.701 [INFO][8518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.701 [INFO][8518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.709 [WARNING][8518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.709 [INFO][8518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" HandleID="k8s-pod-network.55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-csi--node--driver--2qg6j-eth0" May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.710 [INFO][8518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.712747 containerd[2668]: 2025-05-17 00:14:57.711 [INFO][8498] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f" May 17 00:14:57.713067 containerd[2668]: time="2025-05-17T00:14:57.712774190Z" level=info msg="TearDown network for sandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\" successfully" May 17 00:14:57.714343 containerd[2668]: time="2025-05-17T00:14:57.714315440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:57.714381 containerd[2668]: time="2025-05-17T00:14:57.714369520Z" level=info msg="RemovePodSandbox \"55be51626eb856ce7f57bfedbf5439303b3177d6406dede2cbd1dba5790a1b6f\" returns successfully" May 17 00:14:57.714751 containerd[2668]: time="2025-05-17T00:14:57.714730682Z" level=info msg="StopPodSandbox for \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\"" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.746 [WARNING][8547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"46e0c7e1-4d50-450c-9360-f91ac6e40f72", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456", Pod:"coredns-7c65d6cfc9-28zzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28700f392d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.746 [INFO][8547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.746 [INFO][8547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" iface="eth0" netns="" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.746 [INFO][8547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.746 [INFO][8547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.763 [INFO][8568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.763 [INFO][8568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.763 [INFO][8568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.770 [WARNING][8568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.770 [INFO][8568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.771 [INFO][8568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.774298 containerd[2668]: 2025-05-17 00:14:57.772 [INFO][8547] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.774636 containerd[2668]: time="2025-05-17T00:14:57.774327373Z" level=info msg="TearDown network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\" successfully" May 17 00:14:57.774636 containerd[2668]: time="2025-05-17T00:14:57.774347013Z" level=info msg="StopPodSandbox for \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\" returns successfully" May 17 00:14:57.774636 containerd[2668]: time="2025-05-17T00:14:57.774600934Z" level=info msg="RemovePodSandbox for \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\"" May 17 00:14:57.774636 containerd[2668]: time="2025-05-17T00:14:57.774633535Z" level=info msg="Forcibly stopping sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\"" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.804 [WARNING][8597] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"46e0c7e1-4d50-450c-9360-f91ac6e40f72", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"fc07b97f5d97aa40618c3dbc7147eab9138a92b0125d9ad4c253d6077e7c6456", Pod:"coredns-7c65d6cfc9-28zzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28700f392d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.833435 containerd[2668]: 2025-05-17 
00:14:57.804 [INFO][8597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.804 [INFO][8597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" iface="eth0" netns="" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.804 [INFO][8597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.804 [INFO][8597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.821 [INFO][8616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.821 [INFO][8616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.821 [INFO][8616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.829 [WARNING][8616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.829 [INFO][8616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" HandleID="k8s-pod-network.1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--28zzb-eth0" May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.830 [INFO][8616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.833435 containerd[2668]: 2025-05-17 00:14:57.832 [INFO][8597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4" May 17 00:14:57.833817 containerd[2668]: time="2025-05-17T00:14:57.833454420Z" level=info msg="TearDown network for sandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\" successfully" May 17 00:14:57.834970 containerd[2668]: time="2025-05-17T00:14:57.834943109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:57.835044 containerd[2668]: time="2025-05-17T00:14:57.834996430Z" level=info msg="RemovePodSandbox \"1f9da63d85cd01330c5de11e278a9aa01beaaeb6b6600a734af855489af153f4\" returns successfully" May 17 00:14:57.835308 containerd[2668]: time="2025-05-17T00:14:57.835290271Z" level=info msg="StopPodSandbox for \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\"" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.864 [WARNING][8646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0", GenerateName:"calico-kube-controllers-57674c48cd-", Namespace:"calico-system", SelfLink:"", UID:"92b8742a-1089-44d9-aa7e-9b92fa6f958f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57674c48cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf", Pod:"calico-kube-controllers-57674c48cd-bjqp6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.7/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4fa7c42f9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.864 [INFO][8646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.864 [INFO][8646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" iface="eth0" netns="" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.864 [INFO][8646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.864 [INFO][8646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.895 [INFO][8667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.895 [INFO][8667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.895 [INFO][8667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.903 [WARNING][8667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.903 [INFO][8667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.903 [INFO][8667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.906542 containerd[2668]: 2025-05-17 00:14:57.905 [INFO][8646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.906542 containerd[2668]: time="2025-05-17T00:14:57.906518754Z" level=info msg="TearDown network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\" successfully" May 17 00:14:57.906542 containerd[2668]: time="2025-05-17T00:14:57.906540394Z" level=info msg="StopPodSandbox for \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\" returns successfully" May 17 00:14:57.907012 containerd[2668]: time="2025-05-17T00:14:57.906986157Z" level=info msg="RemovePodSandbox for \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\"" May 17 00:14:57.907046 containerd[2668]: time="2025-05-17T00:14:57.907016957Z" level=info msg="Forcibly stopping sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\"" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.939 [WARNING][8692] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0", GenerateName:"calico-kube-controllers-57674c48cd-", Namespace:"calico-system", SelfLink:"", UID:"92b8742a-1089-44d9-aa7e-9b92fa6f958f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57674c48cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"a942c0675294267cc8459d3439dbb2703f1c2f056bf32da94561598704c388bf", Pod:"calico-kube-controllers-57674c48cd-bjqp6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4fa7c42f9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.939 [INFO][8692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.939 [INFO][8692] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" iface="eth0" netns="" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.939 [INFO][8692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.940 [INFO][8692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.957 [INFO][8711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.957 [INFO][8711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.957 [INFO][8711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.964 [WARNING][8711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.964 [INFO][8711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" HandleID="k8s-pod-network.876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--kube--controllers--57674c48cd--bjqp6-eth0" May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.965 [INFO][8711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:57.967923 containerd[2668]: 2025-05-17 00:14:57.966 [INFO][8692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88" May 17 00:14:57.968304 containerd[2668]: time="2025-05-17T00:14:57.967963856Z" level=info msg="TearDown network for sandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\" successfully" May 17 00:14:57.969457 containerd[2668]: time="2025-05-17T00:14:57.969433305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:57.969495 containerd[2668]: time="2025-05-17T00:14:57.969482505Z" level=info msg="RemovePodSandbox \"876c822175b7be2c2e2e435ef031b4f3ae156e5ddbcd3db00778e7b980598f88\" returns successfully" May 17 00:14:57.969915 containerd[2668]: time="2025-05-17T00:14:57.969893908Z" level=info msg="StopPodSandbox for \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\"" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:57.999 [WARNING][8743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e476acd-d55f-474e-8792-7a4ffbe093e3", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d", Pod:"calico-apiserver-84f7cc55c9-spxxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif03fb2c83d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:57.999 [INFO][8743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:57.999 [INFO][8743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" iface="eth0" netns="" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:57.999 [INFO][8743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:57.999 [INFO][8743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:58.016 [INFO][8764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:58.017 [INFO][8764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:58.017 [INFO][8764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:58.024 [WARNING][8764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:58.024 [INFO][8764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:58.025 [INFO][8764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.027801 containerd[2668]: 2025-05-17 00:14:58.026 [INFO][8743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.028157 containerd[2668]: time="2025-05-17T00:14:58.027836583Z" level=info msg="TearDown network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\" successfully" May 17 00:14:58.028157 containerd[2668]: time="2025-05-17T00:14:58.027861143Z" level=info msg="StopPodSandbox for \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\" returns successfully" May 17 00:14:58.028301 containerd[2668]: time="2025-05-17T00:14:58.028268506Z" level=info msg="RemovePodSandbox for \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\"" May 17 00:14:58.028328 containerd[2668]: time="2025-05-17T00:14:58.028311666Z" level=info msg="Forcibly stopping sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\"" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.058 [WARNING][8796] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e476acd-d55f-474e-8792-7a4ffbe093e3", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"555912c4f396e123d4a9954c83e4c0075599883216de4bfc762883bd810a211d", Pod:"calico-apiserver-84f7cc55c9-spxxt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif03fb2c83d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.058 [INFO][8796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.058 [INFO][8796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" iface="eth0" netns="" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.058 [INFO][8796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.058 [INFO][8796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.075 [INFO][8818] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.075 [INFO][8818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.075 [INFO][8818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.082 [WARNING][8818] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.083 [INFO][8818] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" HandleID="k8s-pod-network.79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--spxxt-eth0" May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.083 [INFO][8818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.086613 containerd[2668]: 2025-05-17 00:14:58.085 [INFO][8796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e" May 17 00:14:58.086994 containerd[2668]: time="2025-05-17T00:14:58.086667100Z" level=info msg="TearDown network for sandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\" successfully" May 17 00:14:58.088147 containerd[2668]: time="2025-05-17T00:14:58.088124268Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:58.088214 containerd[2668]: time="2025-05-17T00:14:58.088178109Z" level=info msg="RemovePodSandbox \"79f7cde8b4fd2216170741d2978117e6737abc60ef28831a1aa6ad9b04c3eb5e\" returns successfully" May 17 00:14:58.091762 containerd[2668]: time="2025-05-17T00:14:58.091735970Z" level=info msg="StopPodSandbox for \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\"" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.121 [WARNING][8847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c38caedc-750a-4223-9bb4-03b5231f627f", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606", Pod:"goldmane-8f77d7b6c-q5w5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali352b1a6e2c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.122 [INFO][8847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.122 [INFO][8847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" iface="eth0" netns="" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.122 [INFO][8847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.122 [INFO][8847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.138 [INFO][8865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.139 [INFO][8865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.139 [INFO][8865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.146 [WARNING][8865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.146 [INFO][8865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.147 [INFO][8865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.150018 containerd[2668]: 2025-05-17 00:14:58.148 [INFO][8847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.150018 containerd[2668]: time="2025-05-17T00:14:58.150018403Z" level=info msg="TearDown network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\" successfully" May 17 00:14:58.150422 containerd[2668]: time="2025-05-17T00:14:58.150040603Z" level=info msg="StopPodSandbox for \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\" returns successfully" May 17 00:14:58.150422 containerd[2668]: time="2025-05-17T00:14:58.150371565Z" level=info msg="RemovePodSandbox for \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\"" May 17 00:14:58.150422 containerd[2668]: time="2025-05-17T00:14:58.150401565Z" level=info msg="Forcibly stopping sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\"" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.179 [WARNING][8900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c38caedc-750a-4223-9bb4-03b5231f627f", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"a0c8e17e864b3925b22cffb70b9da05bb962c8ed69a16f42937a6cee5f40d606", Pod:"goldmane-8f77d7b6c-q5w5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali352b1a6e2c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.179 [INFO][8900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.179 [INFO][8900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" iface="eth0" netns="" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.179 [INFO][8900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.179 [INFO][8900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.196 [INFO][8921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.196 [INFO][8921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.196 [INFO][8921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.206 [WARNING][8921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.206 [INFO][8921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" HandleID="k8s-pod-network.12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" Workload="ci--4081.3.3--n--3bfd76e738-k8s-goldmane--8f77d7b6c--q5w5n-eth0" May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.207 [INFO][8921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.210303 containerd[2668]: 2025-05-17 00:14:58.208 [INFO][8900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f" May 17 00:14:58.210775 containerd[2668]: time="2025-05-17T00:14:58.210747491Z" level=info msg="TearDown network for sandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\" successfully" May 17 00:14:58.212871 containerd[2668]: time="2025-05-17T00:14:58.212824863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:58.212964 containerd[2668]: time="2025-05-17T00:14:58.212891744Z" level=info msg="RemovePodSandbox \"12f4eb18dc3f261110d695ea5d94890c6deec2508a25ab98ac9e2b53a077b34f\" returns successfully" May 17 00:14:58.213333 containerd[2668]: time="2025-05-17T00:14:58.213308386Z" level=info msg="StopPodSandbox for \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\"" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.244 [WARNING][8952] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e", Pod:"calico-apiserver-84f7cc55c9-7jvwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali829f9ce8f67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.244 [INFO][8952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.244 [INFO][8952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" iface="eth0" netns="" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.244 [INFO][8952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.244 [INFO][8952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.261 [INFO][8972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.261 [INFO][8972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.261 [INFO][8972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.269 [WARNING][8972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.269 [INFO][8972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.270 [INFO][8972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.273343 containerd[2668]: 2025-05-17 00:14:58.271 [INFO][8952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.273766 containerd[2668]: time="2025-05-17T00:14:58.273393230Z" level=info msg="TearDown network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\" successfully" May 17 00:14:58.273766 containerd[2668]: time="2025-05-17T00:14:58.273420830Z" level=info msg="StopPodSandbox for \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\" returns successfully" May 17 00:14:58.273816 containerd[2668]: time="2025-05-17T00:14:58.273768832Z" level=info msg="RemovePodSandbox for \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\"" May 17 00:14:58.273816 containerd[2668]: time="2025-05-17T00:14:58.273802793Z" level=info msg="Forcibly stopping sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\"" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.303 [WARNING][9001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0", GenerateName:"calico-apiserver-84f7cc55c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a7c4f20-f66d-476e-aedc-1d3cd8a51bc2", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f7cc55c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"96942d5f433600889c9b33fb06c2dc67e884fe91f6b8e1d6d165d5454f269d3e", Pod:"calico-apiserver-84f7cc55c9-7jvwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali829f9ce8f67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.303 [INFO][9001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.303 [INFO][9001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" iface="eth0" netns="" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.303 [INFO][9001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.303 [INFO][9001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.321 [INFO][9021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.321 [INFO][9021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.321 [INFO][9021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.328 [WARNING][9021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.328 [INFO][9021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" HandleID="k8s-pod-network.9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" Workload="ci--4081.3.3--n--3bfd76e738-k8s-calico--apiserver--84f7cc55c9--7jvwm-eth0" May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.329 [INFO][9021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.332073 containerd[2668]: 2025-05-17 00:14:58.330 [INFO][9001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed" May 17 00:14:58.332351 containerd[2668]: time="2025-05-17T00:14:58.332113746Z" level=info msg="TearDown network for sandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\" successfully" May 17 00:14:58.333643 containerd[2668]: time="2025-05-17T00:14:58.333610435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:58.333677 containerd[2668]: time="2025-05-17T00:14:58.333665155Z" level=info msg="RemovePodSandbox \"9b02e513f07fc2a3a86a6b0678686a81ac3cf58d104604cdd7d99d394b8ecbed\" returns successfully" May 17 00:14:58.334004 containerd[2668]: time="2025-05-17T00:14:58.333975637Z" level=info msg="StopPodSandbox for \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\"" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.363 [WARNING][9053] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.363 [INFO][9053] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.363 [INFO][9053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" iface="eth0" netns="" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.363 [INFO][9053] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.363 [INFO][9053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.379 [INFO][9074] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.380 [INFO][9074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.380 [INFO][9074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.387 [WARNING][9074] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.387 [INFO][9074] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.388 [INFO][9074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.390904 containerd[2668]: 2025-05-17 00:14:58.389 [INFO][9053] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.391139 containerd[2668]: time="2025-05-17T00:14:58.390941382Z" level=info msg="TearDown network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\" successfully" May 17 00:14:58.391139 containerd[2668]: time="2025-05-17T00:14:58.390963782Z" level=info msg="StopPodSandbox for \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\" returns successfully" May 17 00:14:58.391338 containerd[2668]: time="2025-05-17T00:14:58.391311344Z" level=info msg="RemovePodSandbox for \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\"" May 17 00:14:58.391361 containerd[2668]: time="2025-05-17T00:14:58.391344664Z" level=info msg="Forcibly stopping sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\"" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.419 [WARNING][9102] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" WorkloadEndpoint="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.420 [INFO][9102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.420 [INFO][9102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" iface="eth0" netns="" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.420 [INFO][9102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.420 [INFO][9102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.437 [INFO][9121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.437 [INFO][9121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.437 [INFO][9121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.445 [WARNING][9121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.445 [INFO][9121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" HandleID="k8s-pod-network.b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" Workload="ci--4081.3.3--n--3bfd76e738-k8s-whisker--78fd6f9455--vxl45-eth0" May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.446 [INFO][9121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.449128 containerd[2668]: 2025-05-17 00:14:58.447 [INFO][9102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c" May 17 00:14:58.449472 containerd[2668]: time="2025-05-17T00:14:58.449173814Z" level=info msg="TearDown network for sandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\" successfully" May 17 00:14:58.450659 containerd[2668]: time="2025-05-17T00:14:58.450628943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:58.450704 containerd[2668]: time="2025-05-17T00:14:58.450690504Z" level=info msg="RemovePodSandbox \"b6f19307674d139017e31a106996fc876cd6145e0c16b1f627665d979992307c\" returns successfully" May 17 00:14:58.451080 containerd[2668]: time="2025-05-17T00:14:58.451051706Z" level=info msg="StopPodSandbox for \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\"" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.481 [WARNING][9153] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0e09e03c-e3f6-450c-916f-1c8c1d6def9e", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104", Pod:"coredns-7c65d6cfc9-st4g5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ee66ca7c3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.481 [INFO][9153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.481 [INFO][9153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" iface="eth0" netns="" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.481 [INFO][9153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.481 [INFO][9153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.498 [INFO][9174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.498 [INFO][9174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.498 [INFO][9174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.507 [WARNING][9174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.507 [INFO][9174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.508 [INFO][9174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.511134 containerd[2668]: 2025-05-17 00:14:58.509 [INFO][9153] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.511134 containerd[2668]: time="2025-05-17T00:14:58.511123910Z" level=info msg="TearDown network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\" successfully" May 17 00:14:58.511526 containerd[2668]: time="2025-05-17T00:14:58.511143430Z" level=info msg="StopPodSandbox for \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\" returns successfully" May 17 00:14:58.511526 containerd[2668]: time="2025-05-17T00:14:58.511461232Z" level=info msg="RemovePodSandbox for \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\"" May 17 00:14:58.511526 containerd[2668]: time="2025-05-17T00:14:58.511499312Z" level=info msg="Forcibly stopping sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\"" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.541 [WARNING][9204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0e09e03c-e3f6-450c-916f-1c8c1d6def9e", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 14, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-3bfd76e738", ContainerID:"629cc8719d173638c2c9744d4a0704390b3ad7efde7f8dc4bf41ebb756ef9104", Pod:"coredns-7c65d6cfc9-st4g5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9ee66ca7c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:14:58.570299 containerd[2668]: 2025-05-17 
00:14:58.541 [INFO][9204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.541 [INFO][9204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" iface="eth0" netns="" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.541 [INFO][9204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.541 [INFO][9204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.558 [INFO][9224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.558 [INFO][9224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.558 [INFO][9224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.566 [WARNING][9224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.566 [INFO][9224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" HandleID="k8s-pod-network.b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" Workload="ci--4081.3.3--n--3bfd76e738-k8s-coredns--7c65d6cfc9--st4g5-eth0" May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.567 [INFO][9224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:14:58.570299 containerd[2668]: 2025-05-17 00:14:58.569 [INFO][9204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4" May 17 00:14:58.570642 containerd[2668]: time="2025-05-17T00:14:58.570335988Z" level=info msg="TearDown network for sandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\" successfully" May 17 00:14:58.571832 containerd[2668]: time="2025-05-17T00:14:58.571800717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:14:58.571863 containerd[2668]: time="2025-05-17T00:14:58.571852277Z" level=info msg="RemovePodSandbox \"b5f2100e8e1d9af40531882c4168312a2c119ce1688485c6c12c960d38b389c4\" returns successfully" May 17 00:15:06.601276 containerd[2668]: time="2025-05-17T00:15:06.601236194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:15:06.625828 containerd[2668]: time="2025-05-17T00:15:06.625747516Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:06.626048 containerd[2668]: time="2025-05-17T00:15:06.626026877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:06.626113 containerd[2668]: time="2025-05-17T00:15:06.626092758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:15:06.626243 kubelet[4140]: E0517 00:15:06.626202 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:06.626521 kubelet[4140]: E0517 00:15:06.626256 4140 
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:06.626521 kubelet[4140]: E0517 00:15:06.626346 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7cef1fccdb484787957099af255476,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizeP
olicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:06.628008 containerd[2668]: time="2025-05-17T00:15:06.627990207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:15:06.653849 containerd[2668]: time="2025-05-17T00:15:06.653807935Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:06.661243 containerd[2668]: time="2025-05-17T00:15:06.661200732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:06.661304 containerd[2668]: time="2025-05-17T00:15:06.661271452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:15:06.661377 kubelet[4140]: E0517 00:15:06.661347 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:06.661452 kubelet[4140]: E0517 00:15:06.661376 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:06.661523 kubelet[4140]: E0517 00:15:06.661441 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/v
ar/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:06.662650 kubelet[4140]: E0517 00:15:06.662607 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:15:07.601853 kubelet[4140]: E0517 00:15:07.601817 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:15:18.601240 containerd[2668]: time="2025-05-17T00:15:18.601190956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:15:18.601737 kubelet[4140]: E0517 00:15:18.601277 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:15:18.626450 containerd[2668]: time="2025-05-17T00:15:18.626393731Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:18.626693 containerd[2668]: time="2025-05-17T00:15:18.626666532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:18.626765 containerd[2668]: time="2025-05-17T00:15:18.626738812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:15:18.626886 kubelet[4140]: E0517 00:15:18.626839 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:18.626948 kubelet[4140]: E0517 00:15:18.626901 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:15:18.627066 kubelet[4140]: E0517 00:15:18.627022 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jhx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:18.628176 kubelet[4140]: E0517 00:15:18.628154 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:15:29.601443 kubelet[4140]: E0517 
00:15:29.601387 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:15:32.600847 kubelet[4140]: E0517 00:15:32.600797 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:15:44.601752 kubelet[4140]: E0517 00:15:44.601710 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:15:46.563083 systemd[1]: Started sshd@9-147.28.151.230:22-218.92.0.158:13991.service - OpenSSH per-connection server daemon (218.92.0.158:13991). 
May 17 00:15:46.601673 kubelet[4140]: E0517 00:15:46.601617 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:15:48.231410 sshd[9393]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:15:49.787069 sshd[9391]: PAM: Permission denied for root from 218.92.0.158 May 17 00:15:50.236626 sshd[9400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:15:52.399411 sshd[9391]: PAM: Permission denied for root from 218.92.0.158 May 17 00:15:52.849169 sshd[9401]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:15:54.955683 sshd[9391]: PAM: Permission denied for root from 218.92.0.158 May 17 00:15:55.180697 sshd[9391]: Received disconnect from 218.92.0.158 port 13991:11: [preauth] May 17 00:15:55.180697 sshd[9391]: Disconnected from authenticating user root 218.92.0.158 port 13991 [preauth] May 17 00:15:55.183230 systemd[1]: sshd@9-147.28.151.230:22-218.92.0.158:13991.service: Deactivated successfully. 
May 17 00:15:59.602189 containerd[2668]: time="2025-05-17T00:15:59.602106308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:15:59.720748 containerd[2668]: time="2025-05-17T00:15:59.720705572Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:59.720960 containerd[2668]: time="2025-05-17T00:15:59.720938572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:59.721024 containerd[2668]: time="2025-05-17T00:15:59.721003492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:15:59.721143 kubelet[4140]: E0517 00:15:59.721096 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:59.721397 kubelet[4140]: E0517 00:15:59.721153 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:15:59.721397 kubelet[4140]: E0517 00:15:59.721257 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7cef1fccdb484787957099af255476,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:59.722925 containerd[2668]: time="2025-05-17T00:15:59.722908776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:15:59.747280 containerd[2668]: time="2025-05-17T00:15:59.747241182Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:15:59.747509 containerd[2668]: time="2025-05-17T00:15:59.747485102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:15:59.747570 containerd[2668]: time="2025-05-17T00:15:59.747549063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:15:59.747677 kubelet[4140]: E0517 00:15:59.747637 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:59.747730 kubelet[4140]: E0517 00:15:59.747685 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:15:59.747838 kubelet[4140]: E0517 00:15:59.747803 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:15:59.748976 kubelet[4140]: E0517 00:15:59.748947 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:16:00.601727 containerd[2668]: time="2025-05-17T00:16:00.601706542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:16:00.642753 containerd[2668]: time="2025-05-17T00:16:00.642713378Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:16:00.643077 containerd[2668]: time="2025-05-17T00:16:00.642933859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:16:00.643077 containerd[2668]: time="2025-05-17T00:16:00.643001899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:16:00.643130 kubelet[4140]: E0517 00:16:00.643063 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 
00:16:00.643130 kubelet[4140]: E0517 00:16:00.643097 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:16:00.643225 kubelet[4140]: E0517 00:16:00.643190 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jhx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceacc
ount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:16:00.644355 kubelet[4140]: E0517 00:16:00.644329 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:16:11.601959 kubelet[4140]: E0517 00:16:11.601911 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:16:12.601691 kubelet[4140]: E0517 00:16:12.601657 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:16:23.601305 kubelet[4140]: E0517 00:16:23.601259 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:16:23.601774 kubelet[4140]: E0517 00:16:23.601475 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:16:36.601315 kubelet[4140]: E0517 00:16:36.601274 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:16:38.601642 kubelet[4140]: E0517 00:16:38.601602 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:16:49.601689 kubelet[4140]: E0517 00:16:49.601639 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:16:49.602136 kubelet[4140]: E0517 00:16:49.601956 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:17:01.601815 kubelet[4140]: E0517 00:17:01.601759 4140 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:17:04.601528 kubelet[4140]: E0517 00:17:04.601474 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:17:12.600658 kubelet[4140]: E0517 00:17:12.600600 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:17:19.602217 kubelet[4140]: E0517 00:17:19.602155 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:17:24.601985 containerd[2668]: time="2025-05-17T00:17:24.601943011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:17:24.840767 containerd[2668]: time="2025-05-17T00:17:24.840634430Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:24.840924 containerd[2668]: time="2025-05-17T00:17:24.840900470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:24.840994 containerd[2668]: time="2025-05-17T00:17:24.840969390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:17:24.841150 kubelet[4140]: E0517 00:17:24.841095 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:24.841416 kubelet[4140]: E0517 00:17:24.841166 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:17:24.841416 kubelet[4140]: E0517 00:17:24.841285 4140 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jhx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:24.842460 kubelet[4140]: E0517 00:17:24.842431 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:17:34.601430 containerd[2668]: 
time="2025-05-17T00:17:34.601375236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:17:34.627964 containerd[2668]: time="2025-05-17T00:17:34.627916197Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:34.628191 containerd[2668]: time="2025-05-17T00:17:34.628161878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:34.628258 containerd[2668]: time="2025-05-17T00:17:34.628234758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:17:34.628342 kubelet[4140]: E0517 00:17:34.628310 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:34.628573 kubelet[4140]: E0517 00:17:34.628351 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:17:34.628573 kubelet[4140]: E0517 00:17:34.628433 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7cef1fccdb484787957099af255476,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:34.630072 containerd[2668]: time="2025-05-17T00:17:34.630056441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:17:34.651771 containerd[2668]: time="2025-05-17T00:17:34.651730074Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:17:34.660684 containerd[2668]: time="2025-05-17T00:17:34.660644008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:17:34.660761 containerd[2668]: time="2025-05-17T00:17:34.660725488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:17:34.660857 kubelet[4140]: E0517 00:17:34.660813 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:34.660933 kubelet[4140]: E0517 00:17:34.660866 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:17:34.661026 kubelet[4140]: E0517 00:17:34.660991 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:17:34.662158 kubelet[4140]: E0517 00:17:34.662128 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:17:39.601435 kubelet[4140]: E0517 00:17:39.601387 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:17:45.601682 kubelet[4140]: E0517 00:17:45.601633 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:17:54.601424 kubelet[4140]: E0517 00:17:54.601326 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:17:57.603946 kubelet[4140]: E0517 00:17:57.603909 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 
00:18:07.601568 kubelet[4140]: E0517 00:18:07.601505 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:18:09.439061 systemd[1]: Started sshd@10-147.28.151.230:22-218.92.0.158:60601.service - OpenSSH per-connection server daemon (218.92.0.158:60601). May 17 00:18:09.601451 kubelet[4140]: E0517 00:18:09.601404 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:18:11.078501 sshd[9832]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:18:13.461924 sshd[9830]: PAM: Permission denied for root from 218.92.0.158 May 17 00:18:13.903539 sshd[9833]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:18:16.031133 sshd[9830]: PAM: Permission denied for root from 218.92.0.158 May 17 00:18:16.472939 sshd[9834]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:18:18.680654 sshd[9830]: PAM: Permission denied for root from 218.92.0.158 May 17 00:18:18.901700 sshd[9830]: Received disconnect from 218.92.0.158 port 60601:11: [preauth] May 17 00:18:18.901700 sshd[9830]: Disconnected from authenticating user root 218.92.0.158 port 60601 [preauth] May 17 00:18:18.903741 systemd[1]: 
sshd@10-147.28.151.230:22-218.92.0.158:60601.service: Deactivated successfully. May 17 00:18:19.601639 kubelet[4140]: E0517 00:18:19.601564 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:18:22.601225 kubelet[4140]: E0517 00:18:22.601170 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:18:30.601607 kubelet[4140]: E0517 00:18:30.601557 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:18:36.602078 kubelet[4140]: E0517 00:18:36.602003 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:18:44.601764 kubelet[4140]: E0517 00:18:44.601719 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:18:50.601447 kubelet[4140]: E0517 00:18:50.601396 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:18:55.601462 kubelet[4140]: E0517 00:18:55.601397 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:19:02.601722 kubelet[4140]: E0517 00:19:02.601679 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:19:07.601698 kubelet[4140]: E0517 00:19:07.601631 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:19:16.601422 kubelet[4140]: E0517 00:19:16.601357 4140 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:19:19.600718 kubelet[4140]: E0517 00:19:19.600669 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:19:30.601125 kubelet[4140]: E0517 00:19:30.601081 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a" May 17 00:19:32.600995 kubelet[4140]: E0517 00:19:32.600944 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f" May 17 00:19:41.617036 systemd[1]: Started sshd@11-147.28.151.230:22-76.182.76.228:59364.service - OpenSSH per-connection server daemon (76.182.76.228:59364). 
May 17 00:19:42.601630 kubelet[4140]: E0517 00:19:42.601592 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:19:43.601538 kubelet[4140]: E0517 00:19:43.601504 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:19:55.601430 kubelet[4140]: E0517 00:19:55.601378 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:19:56.601342 kubelet[4140]: E0517 00:19:56.601316 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:20:06.601781 kubelet[4140]: E0517 00:20:06.601723 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:20:10.601757 containerd[2668]: time="2025-05-17T00:20:10.601660363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:20:10.781045 containerd[2668]: time="2025-05-17T00:20:10.780824593Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:20:10.782477 containerd[2668]: time="2025-05-17T00:20:10.782414676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:20:10.782566 containerd[2668]: time="2025-05-17T00:20:10.782482836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:20:10.782652 kubelet[4140]: E0517 00:20:10.782613 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:20:10.782922 kubelet[4140]: E0517 00:20:10.782663 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:20:10.782922 kubelet[4140]: E0517 00:20:10.782773 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jhx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-q5w5n_calico-system(c38caedc-750a-4223-9bb4-03b5231f627f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:20:10.783935 kubelet[4140]: E0517 00:20:10.783909 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:20:14.203999 systemd[1]: Started sshd@12-147.28.151.230:22-76.182.76.228:53900.service - OpenSSH per-connection server daemon (76.182.76.228:53900).
May 17 00:20:20.601752 containerd[2668]: time="2025-05-17T00:20:20.601703074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:20:20.627035 containerd[2668]: time="2025-05-17T00:20:20.626975907Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:20:20.627267 containerd[2668]: time="2025-05-17T00:20:20.627241227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:20:20.627345 containerd[2668]: time="2025-05-17T00:20:20.627312147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 17 00:20:20.627473 kubelet[4140]: E0517 00:20:20.627422 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:20:20.627740 kubelet[4140]: E0517 00:20:20.627486 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:20:20.627740 kubelet[4140]: E0517 00:20:20.627603 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc7cef1fccdb484787957099af255476,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:20:20.629255 containerd[2668]: time="2025-05-17T00:20:20.629236590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 00:20:20.658652 containerd[2668]: time="2025-05-17T00:20:20.658606387Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:20:20.658873 containerd[2668]: time="2025-05-17T00:20:20.658843587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:20:20.658930 containerd[2668]: time="2025-05-17T00:20:20.658912708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 17 00:20:20.659037 kubelet[4140]: E0517 00:20:20.658999 4140 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:20:20.659079 kubelet[4140]: E0517 00:20:20.659047 4140 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:20:20.659183 kubelet[4140]: E0517 00:20:20.659149 4140 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b62v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7c95f6579-26ngv_calico-system(fe4f1f40-6407-495f-8d43-6209e12b2a8a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:20:20.660321 kubelet[4140]: E0517 00:20:20.660285 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:20:23.601797 kubelet[4140]: E0517 00:20:23.601758 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:20:28.253140 systemd[1]: Started sshd@13-147.28.151.230:22-218.92.0.158:25282.service - OpenSSH per-connection server daemon (218.92.0.158:25282).
May 17 00:20:29.830355 sshd[10184]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 00:20:31.601946 kubelet[4140]: E0517 00:20:31.601904 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:20:31.962882 sshd[10182]: PAM: Permission denied for root from 218.92.0.158
May 17 00:20:32.386519 sshd[10185]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 00:20:34.263157 sshd[10182]: PAM: Permission denied for root from 218.92.0.158
May 17 00:20:34.687065 sshd[10207]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 00:20:36.839512 sshd[10182]: PAM: Permission denied for root from 218.92.0.158
May 17 00:20:37.051714 sshd[10182]: Received disconnect from 218.92.0.158 port 25282:11: [preauth]
May 17 00:20:37.051714 sshd[10182]: Disconnected from authenticating user root 218.92.0.158 port 25282 [preauth]
May 17 00:20:37.053874 systemd[1]: sshd@13-147.28.151.230:22-218.92.0.158:25282.service: Deactivated successfully.
May 17 00:20:37.601213 kubelet[4140]: E0517 00:20:37.601154 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:20:43.601543 kubelet[4140]: E0517 00:20:43.601498 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:20:51.601308 kubelet[4140]: E0517 00:20:51.601212 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:20:56.156748 update_engine[2662]: I20250517 00:20:56.156690 2662 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 17 00:20:56.156748 update_engine[2662]: I20250517 00:20:56.156744 2662 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 17 00:20:56.157160 update_engine[2662]: I20250517 00:20:56.156962 2662 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 17 00:20:56.157310 update_engine[2662]: I20250517 00:20:56.157294 2662 omaha_request_params.cc:62] Current group set to lts
May 17 00:20:56.157385 update_engine[2662]: I20250517 00:20:56.157372 2662 update_attempter.cc:499] Already updated boot flags. Skipping.
May 17 00:20:56.157406 update_engine[2662]: I20250517 00:20:56.157382 2662 update_attempter.cc:643] Scheduling an action processor start.
May 17 00:20:56.157406 update_engine[2662]: I20250517 00:20:56.157397 2662 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:20:56.157451 update_engine[2662]: I20250517 00:20:56.157423 2662 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 17 00:20:56.157637 update_engine[2662]: I20250517 00:20:56.157476 2662 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 17 00:20:56.157637 update_engine[2662]: I20250517 00:20:56.157485 2662 omaha_request_action.cc:272] Request:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]:
May 17 00:20:56.157637 update_engine[2662]: I20250517 00:20:56.157491 2662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:20:56.157903 locksmithd[2697]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 17 00:20:56.158444 update_engine[2662]: I20250517 00:20:56.158425 2662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:20:56.158704 update_engine[2662]: I20250517 00:20:56.158681 2662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:20:56.159412 update_engine[2662]: E20250517 00:20:56.159392 2662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:20:56.159457 update_engine[2662]: I20250517 00:20:56.159446 2662 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 17 00:20:56.601479 kubelet[4140]: E0517 00:20:56.601442 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:21:02.601668 kubelet[4140]: E0517 00:21:02.601623 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:21:06.147637 update_engine[2662]: I20250517 00:21:06.147481 2662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:21:06.147963 update_engine[2662]: I20250517 00:21:06.147761 2662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:21:06.147991 update_engine[2662]: I20250517 00:21:06.147955 2662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:21:06.148782 update_engine[2662]: E20250517 00:21:06.148753 2662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:21:06.148957 update_engine[2662]: I20250517 00:21:06.148927 2662 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 17 00:21:11.601680 kubelet[4140]: E0517 00:21:11.601639 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:21:14.600833 kubelet[4140]: E0517 00:21:14.600795 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:21:16.147061 update_engine[2662]: I20250517 00:21:16.146992 2662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:21:16.147461 update_engine[2662]: I20250517 00:21:16.147233 2662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:21:16.147461 update_engine[2662]: I20250517 00:21:16.147424 2662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:21:16.148164 update_engine[2662]: E20250517 00:21:16.148144 2662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:21:16.148198 update_engine[2662]: I20250517 00:21:16.148186 2662 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 17 00:21:25.601613 kubelet[4140]: E0517 00:21:25.601542 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:21:26.147306 update_engine[2662]: I20250517 00:21:26.147221 2662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:21:26.147614 update_engine[2662]: I20250517 00:21:26.147492 2662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:21:26.147726 update_engine[2662]: I20250517 00:21:26.147699 2662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:21:26.148607 update_engine[2662]: E20250517 00:21:26.148474 2662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:21:26.148607 update_engine[2662]: I20250517 00:21:26.148521 2662 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 00:21:26.148607 update_engine[2662]: I20250517 00:21:26.148528 2662 omaha_request_action.cc:617] Omaha request response:
May 17 00:21:26.148607 update_engine[2662]: E20250517 00:21:26.148609 2662 omaha_request_action.cc:636] Omaha request network transfer failed.
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148626 2662 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148631 2662 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148636 2662 update_attempter.cc:306] Processing Done.
May 17 00:21:26.148755 update_engine[2662]: E20250517 00:21:26.148650 2662 update_attempter.cc:619] Update failed.
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148655 2662 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148659 2662 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148665 2662 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148725 2662 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148744 2662 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148749 2662 omaha_request_action.cc:272] Request:
May 17 00:21:26.148755 update_engine[2662]:
May 17 00:21:26.148755 update_engine[2662]:
May 17 00:21:26.148755 update_engine[2662]:
May 17 00:21:26.148755 update_engine[2662]:
May 17 00:21:26.148755 update_engine[2662]:
May 17 00:21:26.148755 update_engine[2662]:
May 17 00:21:26.148755 update_engine[2662]: I20250517 00:21:26.148754 2662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:21:26.149066 update_engine[2662]: I20250517 00:21:26.148869 2662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:21:26.149066 update_engine[2662]: I20250517 00:21:26.149010 2662 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:21:26.149106 locksmithd[2697]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 17 00:21:26.149514 update_engine[2662]: E20250517 00:21:26.149495 2662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:21:26.149550 update_engine[2662]: I20250517 00:21:26.149533 2662 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 00:21:26.149550 update_engine[2662]: I20250517 00:21:26.149539 2662 omaha_request_action.cc:617] Omaha request response:
May 17 00:21:26.149550 update_engine[2662]: I20250517 00:21:26.149544 2662 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:21:26.149621 update_engine[2662]: I20250517 00:21:26.149549 2662 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:21:26.149621 update_engine[2662]: I20250517 00:21:26.149554 2662 update_attempter.cc:306] Processing Done.
May 17 00:21:26.149621 update_engine[2662]: I20250517 00:21:26.149559 2662 update_attempter.cc:310] Error event sent.
May 17 00:21:26.149621 update_engine[2662]: I20250517 00:21:26.149566 2662 update_check_scheduler.cc:74] Next update check in 42m23s
May 17 00:21:26.149721 locksmithd[2697]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 17 00:21:28.600810 kubelet[4140]: E0517 00:21:28.600763 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:21:36.601840 kubelet[4140]: E0517 00:21:36.601794 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:21:41.631279 systemd[1]: sshd@11-147.28.151.230:22-76.182.76.228:59364.service: Deactivated successfully.
May 17 00:21:42.601556 kubelet[4140]: E0517 00:21:42.601521 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:21:47.602079 kubelet[4140]: E0517 00:21:47.602010 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:21:53.999410 sshd[10176]: Connection reset by 76.182.76.228 port 53900 [preauth]
May 17 00:21:54.001283 systemd[1]: sshd@12-147.28.151.230:22-76.182.76.228:53900.service: Deactivated successfully.
May 17 00:21:54.601611 kubelet[4140]: E0517 00:21:54.601568 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:21:59.601678 kubelet[4140]: E0517 00:21:59.601624 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:22:07.601869 kubelet[4140]: E0517 00:22:07.601813 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:22:12.601822 kubelet[4140]: E0517 00:22:12.601770 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:22:20.601418 kubelet[4140]: E0517 00:22:20.601371 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:22:23.258206 systemd[1]: Started sshd@14-147.28.151.230:22-147.75.109.163:33386.service - OpenSSH per-connection server daemon (147.75.109.163:33386).
May 17 00:22:23.674041 sshd[10505]: Accepted publickey for core from 147.75.109.163 port 33386 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:23.675175 sshd[10505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:23.678593 systemd-logind[2650]: New session 10 of user core.
May 17 00:22:23.690747 systemd[1]: Started session-10.scope - Session 10 of User core.
May 17 00:22:24.030005 sshd[10505]: pam_unix(sshd:session): session closed for user core
May 17 00:22:24.032875 systemd[1]: sshd@14-147.28.151.230:22-147.75.109.163:33386.service: Deactivated successfully.
May 17 00:22:24.035361 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:22:24.035885 systemd-logind[2650]: Session 10 logged out. Waiting for processes to exit.
May 17 00:22:24.036523 systemd-logind[2650]: Removed session 10.
May 17 00:22:24.601156 kubelet[4140]: E0517 00:22:24.601107 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:22:29.103122 systemd[1]: Started sshd@15-147.28.151.230:22-147.75.109.163:47664.service - OpenSSH per-connection server daemon (147.75.109.163:47664).
May 17 00:22:29.517708 sshd[10560]: Accepted publickey for core from 147.75.109.163 port 47664 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:29.518814 sshd[10560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:29.521979 systemd-logind[2650]: New session 11 of user core.
May 17 00:22:29.533739 systemd[1]: Started session-11.scope - Session 11 of User core.
May 17 00:22:29.867381 sshd[10560]: pam_unix(sshd:session): session closed for user core
May 17 00:22:29.870299 systemd[1]: sshd@15-147.28.151.230:22-147.75.109.163:47664.service: Deactivated successfully.
May 17 00:22:29.871964 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:22:29.872476 systemd-logind[2650]: Session 11 logged out. Waiting for processes to exit.
May 17 00:22:29.873065 systemd-logind[2650]: Removed session 11.
May 17 00:22:29.954150 systemd[1]: Started sshd@16-147.28.151.230:22-147.75.109.163:47680.service - OpenSSH per-connection server daemon (147.75.109.163:47680).
May 17 00:22:30.378912 sshd[10593]: Accepted publickey for core from 147.75.109.163 port 47680 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:30.380092 sshd[10593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:30.383083 systemd-logind[2650]: New session 12 of user core.
May 17 00:22:30.395734 systemd[1]: Started session-12.scope - Session 12 of User core.
May 17 00:22:30.760968 sshd[10593]: pam_unix(sshd:session): session closed for user core
May 17 00:22:30.763876 systemd[1]: sshd@16-147.28.151.230:22-147.75.109.163:47680.service: Deactivated successfully.
May 17 00:22:30.765624 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:22:30.766146 systemd-logind[2650]: Session 12 logged out. Waiting for processes to exit.
May 17 00:22:30.766739 systemd-logind[2650]: Removed session 12.
May 17 00:22:30.834934 systemd[1]: Started sshd@17-147.28.151.230:22-147.75.109.163:47694.service - OpenSSH per-connection server daemon (147.75.109.163:47694).
May 17 00:22:31.252367 sshd[10632]: Accepted publickey for core from 147.75.109.163 port 47694 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:31.253757 sshd[10632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:31.256702 systemd-logind[2650]: New session 13 of user core.
May 17 00:22:31.266688 systemd[1]: Started session-13.scope - Session 13 of User core.
May 17 00:22:31.601701 kubelet[4140]: E0517 00:22:31.601657 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:22:31.604901 sshd[10632]: pam_unix(sshd:session): session closed for user core
May 17 00:22:31.607826 systemd[1]: sshd@17-147.28.151.230:22-147.75.109.163:47694.service: Deactivated successfully.
May 17 00:22:31.609489 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:22:31.610018 systemd-logind[2650]: Session 13 logged out. Waiting for processes to exit.
May 17 00:22:31.610606 systemd-logind[2650]: Removed session 13.
May 17 00:22:36.602686 kubelet[4140]: E0517 00:22:36.602630 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:22:36.680951 systemd[1]: Started sshd@18-147.28.151.230:22-147.75.109.163:47702.service - OpenSSH per-connection server daemon (147.75.109.163:47702).
May 17 00:22:37.107664 sshd[10732]: Accepted publickey for core from 147.75.109.163 port 47702 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:37.108753 sshd[10732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:37.111654 systemd-logind[2650]: New session 14 of user core.
May 17 00:22:37.123682 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:22:37.467760 sshd[10732]: pam_unix(sshd:session): session closed for user core
May 17 00:22:37.470456 systemd[1]: sshd@18-147.28.151.230:22-147.75.109.163:47702.service: Deactivated successfully.
May 17 00:22:37.472727 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:22:37.473238 systemd-logind[2650]: Session 14 logged out. Waiting for processes to exit.
May 17 00:22:37.473790 systemd-logind[2650]: Removed session 14.
May 17 00:22:37.538951 systemd[1]: Started sshd@19-147.28.151.230:22-147.75.109.163:47708.service - OpenSSH per-connection server daemon (147.75.109.163:47708).
May 17 00:22:37.956557 sshd[10767]: Accepted publickey for core from 147.75.109.163 port 47708 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:37.957898 sshd[10767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:37.961007 systemd-logind[2650]: New session 15 of user core.
May 17 00:22:37.980682 systemd[1]: Started session-15.scope - Session 15 of User core.
May 17 00:22:38.509660 sshd[10767]: pam_unix(sshd:session): session closed for user core
May 17 00:22:38.512540 systemd[1]: sshd@19-147.28.151.230:22-147.75.109.163:47708.service: Deactivated successfully.
May 17 00:22:38.514796 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:22:38.515286 systemd-logind[2650]: Session 15 logged out. Waiting for processes to exit.
May 17 00:22:38.515873 systemd-logind[2650]: Removed session 15.
May 17 00:22:38.584894 systemd[1]: Started sshd@20-147.28.151.230:22-147.75.109.163:49670.service - OpenSSH per-connection server daemon (147.75.109.163:49670).
May 17 00:22:39.009097 sshd[10803]: Accepted publickey for core from 147.75.109.163 port 49670 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:39.010352 sshd[10803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:39.013599 systemd-logind[2650]: New session 16 of user core.
May 17 00:22:39.025692 systemd[1]: Started session-16.scope - Session 16 of User core.
May 17 00:22:40.525215 sshd[10803]: pam_unix(sshd:session): session closed for user core
May 17 00:22:40.528073 systemd[1]: sshd@20-147.28.151.230:22-147.75.109.163:49670.service: Deactivated successfully.
May 17 00:22:40.529751 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:22:40.529945 systemd[1]: session-16.scope: Consumed 4.235s CPU time.
May 17 00:22:40.530286 systemd-logind[2650]: Session 16 logged out. Waiting for processes to exit.
May 17 00:22:40.530888 systemd-logind[2650]: Removed session 16.
May 17 00:22:40.600854 systemd[1]: Started sshd@21-147.28.151.230:22-147.75.109.163:49682.service - OpenSSH per-connection server daemon (147.75.109.163:49682).
May 17 00:22:41.015106 sshd[10900]: Accepted publickey for core from 147.75.109.163 port 49682 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:41.016219 sshd[10900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:41.019363 systemd-logind[2650]: New session 17 of user core.
May 17 00:22:41.032684 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 00:22:41.455112 sshd[10900]: pam_unix(sshd:session): session closed for user core
May 17 00:22:41.458087 systemd[1]: sshd@21-147.28.151.230:22-147.75.109.163:49682.service: Deactivated successfully.
May 17 00:22:41.459742 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:22:41.460219 systemd-logind[2650]: Session 17 logged out. Waiting for processes to exit.
May 17 00:22:41.460802 systemd-logind[2650]: Removed session 17.
May 17 00:22:41.528962 systemd[1]: Started sshd@22-147.28.151.230:22-147.75.109.163:49698.service - OpenSSH per-connection server daemon (147.75.109.163:49698).
May 17 00:22:41.954352 sshd[10948]: Accepted publickey for core from 147.75.109.163 port 49698 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:41.955549 sshd[10948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:41.958573 systemd-logind[2650]: New session 18 of user core.
May 17 00:22:41.969682 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:22:42.310171 sshd[10948]: pam_unix(sshd:session): session closed for user core
May 17 00:22:42.313252 systemd[1]: sshd@22-147.28.151.230:22-147.75.109.163:49698.service: Deactivated successfully.
May 17 00:22:42.314901 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:22:42.315408 systemd-logind[2650]: Session 18 logged out. Waiting for processes to exit.
May 17 00:22:42.316016 systemd-logind[2650]: Removed session 18.
May 17 00:22:46.601181 kubelet[4140]: E0517 00:22:46.601106 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-q5w5n" podUID="c38caedc-750a-4223-9bb4-03b5231f627f"
May 17 00:22:47.381996 systemd[1]: Started sshd@23-147.28.151.230:22-147.75.109.163:49700.service - OpenSSH per-connection server daemon (147.75.109.163:49700).
May 17 00:22:47.799271 sshd[11009]: Accepted publickey for core from 147.75.109.163 port 49700 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:47.800408 sshd[11009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:47.803372 systemd-logind[2650]: New session 19 of user core.
May 17 00:22:47.820696 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:22:48.152456 sshd[11009]: pam_unix(sshd:session): session closed for user core
May 17 00:22:48.155459 systemd[1]: sshd@23-147.28.151.230:22-147.75.109.163:49700.service: Deactivated successfully.
May 17 00:22:48.157704 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:22:48.158213 systemd-logind[2650]: Session 19 logged out. Waiting for processes to exit.
May 17 00:22:48.158802 systemd-logind[2650]: Removed session 19.
May 17 00:22:49.601585 kubelet[4140]: E0517 00:22:49.601540 4140 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7c95f6579-26ngv" podUID="fe4f1f40-6407-495f-8d43-6209e12b2a8a"
May 17 00:22:51.672089 systemd[1]: Started sshd@24-147.28.151.230:22-218.92.0.158:16157.service - OpenSSH per-connection server daemon (218.92.0.158:16157).
May 17 00:22:53.229082 systemd[1]: Started sshd@25-147.28.151.230:22-147.75.109.163:43326.service - OpenSSH per-connection server daemon (147.75.109.163:43326).
May 17 00:22:53.321475 sshd[11054]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
May 17 00:22:53.643285 sshd[11056]: Accepted publickey for core from 147.75.109.163 port 43326 ssh2: RSA SHA256:HjlRaoSCSzBgUs3ArDGbneXnnLI9eUaSLP/NChIjXBo
May 17 00:22:53.644370 sshd[11056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:22:53.647455 systemd-logind[2650]: New session 20 of user core.
May 17 00:22:53.656679 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:22:53.991448 sshd[11056]: pam_unix(sshd:session): session closed for user core
May 17 00:22:53.994321 systemd[1]: sshd@25-147.28.151.230:22-147.75.109.163:43326.service: Deactivated successfully.
May 17 00:22:53.995956 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:22:53.996459 systemd-logind[2650]: Session 20 logged out. Waiting for processes to exit.
May 17 00:22:53.997040 systemd-logind[2650]: Removed session 20.