Feb 14 01:46:24.169486 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
Feb 14 01:46:24.169509 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 14 01:46:24.169517 kernel: KASLR enabled
Feb 14 01:46:24.169523 kernel: efi: EFI v2.7 by American Megatrends
Feb 14 01:46:24.169529 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47e818 RNG=0xebf00018 MEMRESERVE=0xe45e8f98
Feb 14 01:46:24.169535 kernel: random: crng init done
Feb 14 01:46:24.169543 kernel: esrt: Reserving ESRT space from 0x00000000ea47e818 to 0x00000000ea47e878.
Feb 14 01:46:24.169549 kernel: ACPI: Early table checksum verification disabled
Feb 14 01:46:24.169557 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
Feb 14 01:46:24.169563 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
Feb 14 01:46:24.169569 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
Feb 14 01:46:24.169576 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
Feb 14 01:46:24.169582 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
Feb 14 01:46:24.169588 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
Feb 14 01:46:24.169597 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
Feb 14 01:46:24.169603 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
Feb 14 01:46:24.169610 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
Feb 14 01:46:24.169617 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
Feb 14 01:46:24.169623 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
Feb 14 01:46:24.169629 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
Feb 14 01:46:24.169636 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
Feb 14 01:46:24.169643 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
Feb 14 01:46:24.169649 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
Feb 14 01:46:24.169657 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
Feb 14 01:46:24.169664 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
Feb 14 01:46:24.169670 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
Feb 14 01:46:24.169677 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
Feb 14 01:46:24.169684 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
Feb 14 01:46:24.169690 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
Feb 14 01:46:24.169697 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
Feb 14 01:46:24.169703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
Feb 14 01:46:24.169710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
Feb 14 01:46:24.169716 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff]
Feb 14 01:46:24.169723 kernel: Zone ranges:
Feb 14 01:46:24.169729 kernel:   DMA      [mem 0x0000000088300000-0x00000000ffffffff]
Feb 14 01:46:24.169737 kernel:   DMA32    empty
Feb 14 01:46:24.169744 kernel:   Normal   [mem 0x0000000100000000-0x0000083fffffffff]
Feb 14 01:46:24.169750 kernel: Movable zone start for each node
Feb 14 01:46:24.169757 kernel: Early memory node ranges
Feb 14 01:46:24.169763 kernel:   node   0: [mem 0x0000000088300000-0x00000000883fffff]
Feb 14 01:46:24.169773 kernel:   node   0: [mem 0x0000000090000000-0x0000000091ffffff]
Feb 14 01:46:24.169779 kernel:   node   0: [mem 0x0000000092000000-0x0000000093ffffff]
Feb 14 01:46:24.169788 kernel:   node   0: [mem 0x0000000094000000-0x00000000eba37fff]
Feb 14 01:46:24.169794 kernel:   node   0: [mem 0x00000000eba38000-0x00000000ebeccfff]
Feb 14 01:46:24.169801 kernel:   node   0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
Feb 14 01:46:24.169808 kernel:   node   0: [mem 0x00000000ebece000-0x00000000ebecffff]
Feb 14 01:46:24.169815 kernel:   node   0: [mem 0x00000000ebed0000-0x00000000ec0effff]
Feb 14 01:46:24.169822 kernel:   node   0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
Feb 14 01:46:24.169828 kernel:   node   0: [mem 0x00000000ec100000-0x00000000ee54ffff]
Feb 14 01:46:24.169835 kernel:   node   0: [mem 0x00000000ee550000-0x00000000f765ffff]
Feb 14 01:46:24.169842 kernel:   node   0: [mem 0x00000000f7660000-0x00000000f784ffff]
Feb 14 01:46:24.169849 kernel:   node   0: [mem 0x00000000f7850000-0x00000000f7fdffff]
Feb 14 01:46:24.169857 kernel:   node   0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
Feb 14 01:46:24.169864 kernel:   node   0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
Feb 14 01:46:24.169871 kernel:   node   0: [mem 0x00000000ffc90000-0x00000000ffffffff]
Feb 14 01:46:24.169877 kernel:   node   0: [mem 0x0000080000000000-0x000008007fffffff]
Feb 14 01:46:24.169884 kernel:   node   0: [mem 0x0000080100000000-0x0000083fffffffff]
Feb 14 01:46:24.169891 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
Feb 14 01:46:24.169898 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
Feb 14 01:46:24.169905 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
Feb 14 01:46:24.169912 kernel: psci: probing for conduit method from ACPI.
Feb 14 01:46:24.169919 kernel: psci: PSCIv1.1 detected in firmware.
Feb 14 01:46:24.169926 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 14 01:46:24.169934 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 14 01:46:24.169941 kernel: psci: SMC Calling Convention v1.2
Feb 14 01:46:24.169948 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 14 01:46:24.169955 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
Feb 14 01:46:24.169962 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
Feb 14 01:46:24.169969 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
Feb 14 01:46:24.169976 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
Feb 14 01:46:24.169982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
Feb 14 01:46:24.169989 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
Feb 14 01:46:24.169996 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
Feb 14 01:46:24.170003 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
Feb 14 01:46:24.170010 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
Feb 14 01:46:24.170018 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
Feb 14 01:46:24.170025 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
Feb 14 01:46:24.170032 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
Feb 14 01:46:24.170039 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
Feb 14 01:46:24.170045 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
Feb 14 01:46:24.170052 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
Feb 14 01:46:24.170059 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
Feb 14 01:46:24.170066 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
Feb 14 01:46:24.170073 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
Feb 14 01:46:24.170080 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
Feb 14 01:46:24.170087 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
Feb 14 01:46:24.170093 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
Feb 14 01:46:24.170102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
Feb 14 01:46:24.170108 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
Feb 14 01:46:24.170115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
Feb 14 01:46:24.170122 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
Feb 14 01:46:24.170129 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
Feb 14 01:46:24.170136 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
Feb 14 01:46:24.170143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
Feb 14 01:46:24.170149 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
Feb 14 01:46:24.170156 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
Feb 14 01:46:24.170163 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
Feb 14 01:46:24.170170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
Feb 14 01:46:24.170182 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
Feb 14 01:46:24.170190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
Feb 14 01:46:24.170196 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
Feb 14 01:46:24.170203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
Feb 14 01:46:24.170210 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
Feb 14 01:46:24.170217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
Feb 14 01:46:24.170224 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
Feb 14 01:46:24.170230 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
Feb 14 01:46:24.170237 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
Feb 14 01:46:24.170244 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
Feb 14 01:46:24.170251 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
Feb 14 01:46:24.170258 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
Feb 14 01:46:24.170266 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
Feb 14 01:46:24.170273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
Feb 14 01:46:24.170280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
Feb 14 01:46:24.170287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
Feb 14 01:46:24.170294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
Feb 14 01:46:24.170301 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
Feb 14 01:46:24.170307 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
Feb 14 01:46:24.170314 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
Feb 14 01:46:24.170328 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
Feb 14 01:46:24.170335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
Feb 14 01:46:24.170344 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
Feb 14 01:46:24.170351 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
Feb 14 01:46:24.170358 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
Feb 14 01:46:24.170366 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
Feb 14 01:46:24.170373 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
Feb 14 01:46:24.170380 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
Feb 14 01:46:24.170389 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
Feb 14 01:46:24.170396 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
Feb 14 01:46:24.170403 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
Feb 14 01:46:24.170411 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
Feb 14 01:46:24.170418 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
Feb 14 01:46:24.170425 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
Feb 14 01:46:24.170432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
Feb 14 01:46:24.170440 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
Feb 14 01:46:24.170447 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
Feb 14 01:46:24.170454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
Feb 14 01:46:24.170461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
Feb 14 01:46:24.170468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
Feb 14 01:46:24.170477 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
Feb 14 01:46:24.170484 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
Feb 14 01:46:24.170492 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
Feb 14 01:46:24.170499 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
Feb 14 01:46:24.170506 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
Feb 14 01:46:24.170513 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
Feb 14 01:46:24.170520 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
Feb 14 01:46:24.170527 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 14 01:46:24.170535 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 14 01:46:24.170542 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
Feb 14 01:46:24.170549 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
Feb 14 01:46:24.170558 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
Feb 14 01:46:24.170565 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
Feb 14 01:46:24.170573 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
Feb 14 01:46:24.170580 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
Feb 14 01:46:24.170587 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
Feb 14 01:46:24.170594 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
Feb 14 01:46:24.170601 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
Feb 14 01:46:24.170608 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
Feb 14 01:46:24.170615 kernel: Detected PIPT I-cache on CPU0
Feb 14 01:46:24.170622 kernel: CPU features: detected: GIC system register CPU interface
Feb 14 01:46:24.170630 kernel: CPU features: detected: Virtualization Host Extensions
Feb 14 01:46:24.170639 kernel: CPU features: detected: Hardware dirty bit management
Feb 14 01:46:24.170646 kernel: CPU features: detected: Spectre-v4
Feb 14 01:46:24.170653 kernel: CPU features: detected: Spectre-BHB
Feb 14 01:46:24.170661 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 14 01:46:24.170668 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 14 01:46:24.170675 kernel: CPU features: detected: ARM erratum 1418040
Feb 14 01:46:24.170682 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 14 01:46:24.170690 kernel: alternatives: applying boot alternatives
Feb 14 01:46:24.170699 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 14 01:46:24.170706 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 14 01:46:24.170715 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Feb 14 01:46:24.170722 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
Feb 14 01:46:24.170729 kernel: printk: log_buf_len min size: 262144 bytes
Feb 14 01:46:24.170737 kernel: printk: log_buf_len: 1048576 bytes
Feb 14 01:46:24.170744 kernel: printk: early log buf free: 250032(95%)
Feb 14 01:46:24.170751 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
Feb 14 01:46:24.170758 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
Feb 14 01:46:24.170766 kernel: Fallback order for Node 0: 0
Feb 14 01:46:24.170773 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
Feb 14 01:46:24.170780 kernel: Policy zone: Normal
Feb 14 01:46:24.170787 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 14 01:46:24.170794 kernel: software IO TLB: area num 128.
Feb 14 01:46:24.170803 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
Feb 14 01:46:24.170810 kernel: Memory: 262922520K/268174336K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 5251816K reserved, 0K cma-reserved)
Feb 14 01:46:24.170818 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
Feb 14 01:46:24.170825 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 14 01:46:24.170833 kernel: rcu: RCU event tracing is enabled.
Feb 14 01:46:24.170840 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
Feb 14 01:46:24.170848 kernel: Trampoline variant of Tasks RCU enabled.
Feb 14 01:46:24.170855 kernel: Tracing variant of Tasks RCU enabled.
Feb 14 01:46:24.170863 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 14 01:46:24.170870 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
Feb 14 01:46:24.170877 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 14 01:46:24.170886 kernel: GICv3: GIC: Using split EOI/Deactivate mode
Feb 14 01:46:24.170893 kernel: GICv3: 672 SPIs implemented
Feb 14 01:46:24.170901 kernel: GICv3: 0 Extended SPIs implemented
Feb 14 01:46:24.170908 kernel: Root IRQ handler: gic_handle_irq
Feb 14 01:46:24.170915 kernel: GICv3: GICv3 features: 16 PPIs
Feb 14 01:46:24.170922 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
Feb 14 01:46:24.170930 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
Feb 14 01:46:24.170937 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
Feb 14 01:46:24.170944 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
Feb 14 01:46:24.170951 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
Feb 14 01:46:24.170958 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
Feb 14 01:46:24.170965 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
Feb 14 01:46:24.170973 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
Feb 14 01:46:24.170981 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
Feb 14 01:46:24.170988 kernel: ITS [mem 0x100100040000-0x10010005ffff]
Feb 14 01:46:24.170996 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171003 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171011 kernel: ITS [mem 0x100100060000-0x10010007ffff]
Feb 14 01:46:24.171018 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171026 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171033 kernel: ITS [mem 0x100100080000-0x10010009ffff]
Feb 14 01:46:24.171041 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171048 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171055 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
Feb 14 01:46:24.171064 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171072 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171079 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
Feb 14 01:46:24.171086 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171094 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171101 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
Feb 14 01:46:24.171109 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171116 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171123 kernel: ITS [mem 0x100100100000-0x10010011ffff]
Feb 14 01:46:24.171131 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171138 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171147 kernel: ITS [mem 0x100100120000-0x10010013ffff]
Feb 14 01:46:24.171154 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 01:46:24.171162 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
Feb 14 01:46:24.171169 kernel: GICv3: using LPI property table @0x00000800003e0000
Feb 14 01:46:24.171176 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
Feb 14 01:46:24.171185 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 14 01:46:24.171193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171200 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
Feb 14 01:46:24.171208 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
Feb 14 01:46:24.171215 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 14 01:46:24.171223 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 14 01:46:24.171232 kernel: Console: colour dummy device 80x25
Feb 14 01:46:24.171239 kernel: printk: console [tty0] enabled
Feb 14 01:46:24.171247 kernel: ACPI: Core revision 20230628
Feb 14 01:46:24.171254 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 14 01:46:24.171262 kernel: pid_max: default: 81920 minimum: 640
Feb 14 01:46:24.171269 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 14 01:46:24.171277 kernel: landlock: Up and running.
Feb 14 01:46:24.171284 kernel: SELinux: Initializing.
Feb 14 01:46:24.171292 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 14 01:46:24.171300 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 14 01:46:24.171309 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Feb 14 01:46:24.171316 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Feb 14 01:46:24.171324 kernel: rcu: Hierarchical SRCU implementation.
Feb 14 01:46:24.171331 kernel: rcu: Max phase no-delay instances is 400.
Feb 14 01:46:24.171339 kernel: Platform MSI: ITS@0x100100040000 domain created
Feb 14 01:46:24.171346 kernel: Platform MSI: ITS@0x100100060000 domain created
Feb 14 01:46:24.171354 kernel: Platform MSI: ITS@0x100100080000 domain created
Feb 14 01:46:24.171361 kernel: Platform MSI: ITS@0x1001000a0000 domain created
Feb 14 01:46:24.171370 kernel: Platform MSI: ITS@0x1001000c0000 domain created
Feb 14 01:46:24.171377 kernel: Platform MSI: ITS@0x1001000e0000 domain created
Feb 14 01:46:24.171385 kernel: Platform MSI: ITS@0x100100100000 domain created
Feb 14 01:46:24.171392 kernel: Platform MSI: ITS@0x100100120000 domain created
Feb 14 01:46:24.171399 kernel: PCI/MSI: ITS@0x100100040000 domain created
Feb 14 01:46:24.171407 kernel: PCI/MSI: ITS@0x100100060000 domain created
Feb 14 01:46:24.171414 kernel: PCI/MSI: ITS@0x100100080000 domain created
Feb 14 01:46:24.171422 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
Feb 14 01:46:24.171429 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
Feb 14 01:46:24.171437 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
Feb 14 01:46:24.171445 kernel: PCI/MSI: ITS@0x100100100000 domain created
Feb 14 01:46:24.171453 kernel: PCI/MSI: ITS@0x100100120000 domain created
Feb 14 01:46:24.171460 kernel: Remapping and enabling EFI services.
Feb 14 01:46:24.171467 kernel: smp: Bringing up secondary CPUs ...
Feb 14 01:46:24.171475 kernel: Detected PIPT I-cache on CPU1
Feb 14 01:46:24.171483 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
Feb 14 01:46:24.171490 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
Feb 14 01:46:24.171498 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171505 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
Feb 14 01:46:24.171515 kernel: Detected PIPT I-cache on CPU2
Feb 14 01:46:24.171523 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
Feb 14 01:46:24.171530 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
Feb 14 01:46:24.171538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171545 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
Feb 14 01:46:24.171552 kernel: Detected PIPT I-cache on CPU3
Feb 14 01:46:24.171560 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
Feb 14 01:46:24.171567 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
Feb 14 01:46:24.171575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171582 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
Feb 14 01:46:24.171591 kernel: Detected PIPT I-cache on CPU4
Feb 14 01:46:24.171598 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
Feb 14 01:46:24.171606 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
Feb 14 01:46:24.171613 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171620 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
Feb 14 01:46:24.171627 kernel: Detected PIPT I-cache on CPU5
Feb 14 01:46:24.171635 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
Feb 14 01:46:24.171642 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
Feb 14 01:46:24.171650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171658 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
Feb 14 01:46:24.171666 kernel: Detected PIPT I-cache on CPU6
Feb 14 01:46:24.171674 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
Feb 14 01:46:24.171681 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
Feb 14 01:46:24.171688 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171696 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
Feb 14 01:46:24.171703 kernel: Detected PIPT I-cache on CPU7
Feb 14 01:46:24.171710 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
Feb 14 01:46:24.171718 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
Feb 14 01:46:24.171727 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171734 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
Feb 14 01:46:24.171741 kernel: Detected PIPT I-cache on CPU8
Feb 14 01:46:24.171749 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
Feb 14 01:46:24.171756 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
Feb 14 01:46:24.171764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171771 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
Feb 14 01:46:24.171778 kernel: Detected PIPT I-cache on CPU9
Feb 14 01:46:24.171786 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
Feb 14 01:46:24.171793 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
Feb 14 01:46:24.171802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171810 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
Feb 14 01:46:24.171817 kernel: Detected PIPT I-cache on CPU10
Feb 14 01:46:24.171825 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
Feb 14 01:46:24.171832 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
Feb 14 01:46:24.171839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171847 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
Feb 14 01:46:24.171854 kernel: Detected PIPT I-cache on CPU11
Feb 14 01:46:24.171862 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
Feb 14 01:46:24.171869 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
Feb 14 01:46:24.171878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171886 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
Feb 14 01:46:24.171893 kernel: Detected PIPT I-cache on CPU12
Feb 14 01:46:24.171900 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
Feb 14 01:46:24.171908 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
Feb 14 01:46:24.171915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171922 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
Feb 14 01:46:24.171930 kernel: Detected PIPT I-cache on CPU13
Feb 14 01:46:24.171937 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
Feb 14 01:46:24.171946 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
Feb 14 01:46:24.171953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171961 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
Feb 14 01:46:24.171968 kernel: Detected PIPT I-cache on CPU14
Feb 14 01:46:24.171976 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
Feb 14 01:46:24.171983 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
Feb 14 01:46:24.171991 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.171998 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
Feb 14 01:46:24.172005 kernel: Detected PIPT I-cache on CPU15
Feb 14 01:46:24.172014 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
Feb 14 01:46:24.172022 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
Feb 14 01:46:24.172029 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172037 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
Feb 14 01:46:24.172044 kernel: Detected PIPT I-cache on CPU16
Feb 14 01:46:24.172052 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
Feb 14 01:46:24.172059 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
Feb 14 01:46:24.172066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172074 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
Feb 14 01:46:24.172081 kernel: Detected PIPT I-cache on CPU17
Feb 14 01:46:24.172098 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
Feb 14 01:46:24.172107 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
Feb 14 01:46:24.172115 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172123 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
Feb 14 01:46:24.172130 kernel: Detected PIPT I-cache on CPU18
Feb 14 01:46:24.172138 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
Feb 14 01:46:24.172146 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
Feb 14 01:46:24.172154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172161 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
Feb 14 01:46:24.172171 kernel: Detected PIPT I-cache on CPU19
Feb 14 01:46:24.172180 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
Feb 14 01:46:24.172189 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
Feb 14 01:46:24.172196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172204 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
Feb 14 01:46:24.172213 kernel: Detected PIPT I-cache on CPU20
Feb 14 01:46:24.172221 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
Feb 14 01:46:24.172231 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
Feb 14 01:46:24.172239 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172246 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
Feb 14 01:46:24.172254 kernel: Detected PIPT I-cache on CPU21
Feb 14 01:46:24.172263 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
Feb 14 01:46:24.172271 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
Feb 14 01:46:24.172279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172287 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
Feb 14 01:46:24.172296 kernel: Detected PIPT I-cache on CPU22
Feb 14 01:46:24.172304 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
Feb 14 01:46:24.172311 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
Feb 14 01:46:24.172319 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172327 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
Feb 14 01:46:24.172335 kernel: Detected PIPT I-cache on CPU23
Feb 14 01:46:24.172342 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
Feb 14 01:46:24.172350 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
Feb 14 01:46:24.172358 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172366 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
Feb 14 01:46:24.172375 kernel: Detected PIPT I-cache on CPU24
Feb 14 01:46:24.172383 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
Feb 14 01:46:24.172391 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
Feb 14 01:46:24.172398 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172406 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
Feb 14 01:46:24.172414 kernel: Detected PIPT I-cache on CPU25
Feb 14 01:46:24.172422 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
Feb 14 01:46:24.172429 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
Feb 14 01:46:24.172437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172446 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
Feb 14 01:46:24.172454 kernel: Detected PIPT I-cache on CPU26
Feb 14 01:46:24.172462 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
Feb 14 01:46:24.172470 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
Feb 14 01:46:24.172477 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172485 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
Feb 14 01:46:24.172493 kernel: Detected PIPT I-cache on CPU27
Feb 14 01:46:24.172501 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
Feb 14 01:46:24.172509 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
Feb 14 01:46:24.172517 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172526 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
Feb 14 01:46:24.172533 kernel: Detected PIPT I-cache on CPU28
Feb 14 01:46:24.172541 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
Feb 14 01:46:24.172549 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000
Feb 14 01:46:24.172557 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172564 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
Feb 14 01:46:24.172572 kernel: Detected PIPT I-cache on CPU29
Feb 14 01:46:24.172580 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
Feb 14 01:46:24.172588 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000
Feb 14 01:46:24.172597 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172605 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
Feb 14 01:46:24.172613 kernel: Detected PIPT I-cache on CPU30
Feb 14 01:46:24.172620 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
Feb 14 01:46:24.172628 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000
Feb 14 01:46:24.172636 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172644 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
Feb 14 01:46:24.172651 kernel: Detected PIPT I-cache on CPU31
Feb 14 01:46:24.172659 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
Feb 14 01:46:24.172667 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000
Feb 14 01:46:24.172676 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 01:46:24.172684 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
Feb 14 01:46:24.172692 kernel: Detected PIPT I-cache on CPU32
Feb 14 01:46:24.172700 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
Feb 14 01:46:24.172707 kernel: GICv3: CPU32: using allocated LPI
pending table @0x00000800009f0000 Feb 14 01:46:24.172715 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172723 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Feb 14 01:46:24.172731 kernel: Detected PIPT I-cache on CPU33 Feb 14 01:46:24.172738 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Feb 14 01:46:24.172748 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Feb 14 01:46:24.172756 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172763 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Feb 14 01:46:24.172771 kernel: Detected PIPT I-cache on CPU34 Feb 14 01:46:24.172779 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Feb 14 01:46:24.172787 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Feb 14 01:46:24.172796 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172803 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Feb 14 01:46:24.172811 kernel: Detected PIPT I-cache on CPU35 Feb 14 01:46:24.172819 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Feb 14 01:46:24.172828 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Feb 14 01:46:24.172836 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172844 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Feb 14 01:46:24.172852 kernel: Detected PIPT I-cache on CPU36 Feb 14 01:46:24.172859 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Feb 14 01:46:24.172867 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Feb 14 01:46:24.172875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172883 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Feb 
14 01:46:24.172891 kernel: Detected PIPT I-cache on CPU37 Feb 14 01:46:24.172900 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Feb 14 01:46:24.172908 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Feb 14 01:46:24.172916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172923 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Feb 14 01:46:24.172931 kernel: Detected PIPT I-cache on CPU38 Feb 14 01:46:24.172939 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Feb 14 01:46:24.172946 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Feb 14 01:46:24.172954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172962 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Feb 14 01:46:24.172969 kernel: Detected PIPT I-cache on CPU39 Feb 14 01:46:24.172979 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Feb 14 01:46:24.172986 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Feb 14 01:46:24.172994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173002 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Feb 14 01:46:24.173010 kernel: Detected PIPT I-cache on CPU40 Feb 14 01:46:24.173017 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Feb 14 01:46:24.173025 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Feb 14 01:46:24.173034 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173042 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Feb 14 01:46:24.173050 kernel: Detected PIPT I-cache on CPU41 Feb 14 01:46:24.173058 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Feb 14 01:46:24.173065 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 Feb 14 01:46:24.173073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173081 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Feb 14 01:46:24.173089 kernel: Detected PIPT I-cache on CPU42 Feb 14 01:46:24.173096 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Feb 14 01:46:24.173104 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Feb 14 01:46:24.173113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173121 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Feb 14 01:46:24.173129 kernel: Detected PIPT I-cache on CPU43 Feb 14 01:46:24.173136 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Feb 14 01:46:24.173144 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Feb 14 01:46:24.173152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173159 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Feb 14 01:46:24.173167 kernel: Detected PIPT I-cache on CPU44 Feb 14 01:46:24.173175 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Feb 14 01:46:24.173186 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Feb 14 01:46:24.173194 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173202 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Feb 14 01:46:24.173210 kernel: Detected PIPT I-cache on CPU45 Feb 14 01:46:24.173218 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Feb 14 01:46:24.173226 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Feb 14 01:46:24.173234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173241 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
Feb 14 01:46:24.173249 kernel: Detected PIPT I-cache on CPU46 Feb 14 01:46:24.173257 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Feb 14 01:46:24.173268 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Feb 14 01:46:24.173276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173283 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Feb 14 01:46:24.173291 kernel: Detected PIPT I-cache on CPU47 Feb 14 01:46:24.173299 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Feb 14 01:46:24.173307 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Feb 14 01:46:24.173315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173322 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Feb 14 01:46:24.173330 kernel: Detected PIPT I-cache on CPU48 Feb 14 01:46:24.173339 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Feb 14 01:46:24.173347 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Feb 14 01:46:24.173355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173363 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Feb 14 01:46:24.173370 kernel: Detected PIPT I-cache on CPU49 Feb 14 01:46:24.173378 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Feb 14 01:46:24.173386 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Feb 14 01:46:24.173394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173401 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Feb 14 01:46:24.173409 kernel: Detected PIPT I-cache on CPU50 Feb 14 01:46:24.173418 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Feb 14 01:46:24.173426 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 Feb 14 01:46:24.173433 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173441 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Feb 14 01:46:24.173449 kernel: Detected PIPT I-cache on CPU51 Feb 14 01:46:24.173456 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Feb 14 01:46:24.173464 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Feb 14 01:46:24.173472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173479 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Feb 14 01:46:24.173489 kernel: Detected PIPT I-cache on CPU52 Feb 14 01:46:24.173496 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Feb 14 01:46:24.173504 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Feb 14 01:46:24.173512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173519 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Feb 14 01:46:24.173527 kernel: Detected PIPT I-cache on CPU53 Feb 14 01:46:24.173536 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Feb 14 01:46:24.173544 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Feb 14 01:46:24.173552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173560 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Feb 14 01:46:24.173569 kernel: Detected PIPT I-cache on CPU54 Feb 14 01:46:24.173576 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Feb 14 01:46:24.173585 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Feb 14 01:46:24.173592 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173600 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] Feb 14 01:46:24.173608 kernel: Detected PIPT I-cache on CPU55 Feb 14 01:46:24.173615 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Feb 14 01:46:24.173623 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Feb 14 01:46:24.173631 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173640 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Feb 14 01:46:24.173648 kernel: Detected PIPT I-cache on CPU56 Feb 14 01:46:24.173656 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Feb 14 01:46:24.173663 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Feb 14 01:46:24.173671 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173679 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Feb 14 01:46:24.173687 kernel: Detected PIPT I-cache on CPU57 Feb 14 01:46:24.173695 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Feb 14 01:46:24.173702 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Feb 14 01:46:24.173712 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173719 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Feb 14 01:46:24.173727 kernel: Detected PIPT I-cache on CPU58 Feb 14 01:46:24.173735 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Feb 14 01:46:24.173743 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Feb 14 01:46:24.173751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173758 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Feb 14 01:46:24.173766 kernel: Detected PIPT I-cache on CPU59 Feb 14 01:46:24.173774 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Feb 14 01:46:24.173781 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 Feb 14 01:46:24.173790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173798 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Feb 14 01:46:24.173806 kernel: Detected PIPT I-cache on CPU60 Feb 14 01:46:24.173814 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Feb 14 01:46:24.173822 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Feb 14 01:46:24.173829 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173837 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Feb 14 01:46:24.173845 kernel: Detected PIPT I-cache on CPU61 Feb 14 01:46:24.173853 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Feb 14 01:46:24.173862 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Feb 14 01:46:24.173870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173878 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Feb 14 01:46:24.173885 kernel: Detected PIPT I-cache on CPU62 Feb 14 01:46:24.173893 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Feb 14 01:46:24.173901 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Feb 14 01:46:24.173908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173916 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Feb 14 01:46:24.173924 kernel: Detected PIPT I-cache on CPU63 Feb 14 01:46:24.173932 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Feb 14 01:46:24.173941 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Feb 14 01:46:24.173949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173957 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] Feb 14 01:46:24.173964 kernel: Detected PIPT I-cache on CPU64 Feb 14 01:46:24.173972 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Feb 14 01:46:24.173980 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Feb 14 01:46:24.173988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173995 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Feb 14 01:46:24.174003 kernel: Detected PIPT I-cache on CPU65 Feb 14 01:46:24.174012 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Feb 14 01:46:24.174020 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Feb 14 01:46:24.174028 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174036 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Feb 14 01:46:24.174043 kernel: Detected PIPT I-cache on CPU66 Feb 14 01:46:24.174051 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Feb 14 01:46:24.174059 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Feb 14 01:46:24.174067 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174074 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Feb 14 01:46:24.174082 kernel: Detected PIPT I-cache on CPU67 Feb 14 01:46:24.174092 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Feb 14 01:46:24.174100 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Feb 14 01:46:24.174107 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174115 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Feb 14 01:46:24.174123 kernel: Detected PIPT I-cache on CPU68 Feb 14 01:46:24.174131 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Feb 14 01:46:24.174138 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 Feb 14 01:46:24.174146 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174154 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Feb 14 01:46:24.174163 kernel: Detected PIPT I-cache on CPU69 Feb 14 01:46:24.174171 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Feb 14 01:46:24.174181 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Feb 14 01:46:24.174189 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174197 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Feb 14 01:46:24.174205 kernel: Detected PIPT I-cache on CPU70 Feb 14 01:46:24.174213 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Feb 14 01:46:24.174221 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Feb 14 01:46:24.174228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174236 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Feb 14 01:46:24.174245 kernel: Detected PIPT I-cache on CPU71 Feb 14 01:46:24.174253 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Feb 14 01:46:24.174261 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Feb 14 01:46:24.174269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174276 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Feb 14 01:46:24.174284 kernel: Detected PIPT I-cache on CPU72 Feb 14 01:46:24.174292 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Feb 14 01:46:24.174300 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Feb 14 01:46:24.174307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174316 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] Feb 14 01:46:24.174324 kernel: Detected PIPT I-cache on CPU73 Feb 14 01:46:24.174332 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Feb 14 01:46:24.174339 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Feb 14 01:46:24.174347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174355 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Feb 14 01:46:24.174362 kernel: Detected PIPT I-cache on CPU74 Feb 14 01:46:24.174370 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Feb 14 01:46:24.174378 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Feb 14 01:46:24.174387 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174395 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Feb 14 01:46:24.174403 kernel: Detected PIPT I-cache on CPU75 Feb 14 01:46:24.174410 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Feb 14 01:46:24.174418 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Feb 14 01:46:24.174426 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174434 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Feb 14 01:46:24.174442 kernel: Detected PIPT I-cache on CPU76 Feb 14 01:46:24.174449 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Feb 14 01:46:24.174457 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Feb 14 01:46:24.174467 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174475 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Feb 14 01:46:24.174482 kernel: Detected PIPT I-cache on CPU77 Feb 14 01:46:24.174490 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Feb 14 01:46:24.174498 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Feb 14 01:46:24.174506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174513 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Feb 14 01:46:24.174521 kernel: Detected PIPT I-cache on CPU78 Feb 14 01:46:24.174529 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Feb 14 01:46:24.174538 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Feb 14 01:46:24.174546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174554 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Feb 14 01:46:24.174561 kernel: Detected PIPT I-cache on CPU79 Feb 14 01:46:24.174569 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Feb 14 01:46:24.174577 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Feb 14 01:46:24.174585 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174593 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Feb 14 01:46:24.174600 kernel: smp: Brought up 1 node, 80 CPUs Feb 14 01:46:24.174608 kernel: SMP: Total of 80 processors activated. 
Feb 14 01:46:24.174617 kernel: CPU features: detected: 32-bit EL0 Support Feb 14 01:46:24.174625 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 14 01:46:24.174633 kernel: CPU features: detected: Common not Private translations Feb 14 01:46:24.174640 kernel: CPU features: detected: CRC32 instructions Feb 14 01:46:24.174648 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 14 01:46:24.174656 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 14 01:46:24.174664 kernel: CPU features: detected: LSE atomic instructions Feb 14 01:46:24.174672 kernel: CPU features: detected: Privileged Access Never Feb 14 01:46:24.174679 kernel: CPU features: detected: RAS Extension Support Feb 14 01:46:24.174689 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 14 01:46:24.174696 kernel: CPU: All CPU(s) started at EL2 Feb 14 01:46:24.174704 kernel: alternatives: applying system-wide alternatives Feb 14 01:46:24.174712 kernel: devtmpfs: initialized Feb 14 01:46:24.174720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 14 01:46:24.174728 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.174735 kernel: pinctrl core: initialized pinctrl subsystem Feb 14 01:46:24.174743 kernel: SMBIOS 3.4.0 present. 
Feb 14 01:46:24.174751 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Feb 14 01:46:24.174760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 14 01:46:24.174768 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Feb 14 01:46:24.174776 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 14 01:46:24.174784 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 14 01:46:24.174791 kernel: audit: initializing netlink subsys (disabled) Feb 14 01:46:24.174799 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Feb 14 01:46:24.174807 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 14 01:46:24.174815 kernel: cpuidle: using governor menu Feb 14 01:46:24.174822 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 14 01:46:24.174832 kernel: ASID allocator initialised with 32768 entries Feb 14 01:46:24.174840 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 14 01:46:24.174847 kernel: Serial: AMBA PL011 UART driver Feb 14 01:46:24.174855 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 14 01:46:24.174863 kernel: Modules: 0 pages in range for non-PLT usage Feb 14 01:46:24.174870 kernel: Modules: 509040 pages in range for PLT usage Feb 14 01:46:24.174878 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 14 01:46:24.174886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 14 01:46:24.174894 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 14 01:46:24.174903 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 14 01:46:24.174911 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 14 01:46:24.174918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 14 01:46:24.174926 kernel: HugeTLB: registered 64.0 KiB 
page size, pre-allocated 0 pages Feb 14 01:46:24.174934 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 14 01:46:24.174942 kernel: ACPI: Added _OSI(Module Device) Feb 14 01:46:24.174950 kernel: ACPI: Added _OSI(Processor Device) Feb 14 01:46:24.174957 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 14 01:46:24.174965 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 14 01:46:24.174974 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Feb 14 01:46:24.174982 kernel: ACPI: Interpreter enabled Feb 14 01:46:24.174990 kernel: ACPI: Using GIC for interrupt routing Feb 14 01:46:24.174997 kernel: ACPI: MCFG table detected, 8 entries Feb 14 01:46:24.175005 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175013 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175021 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175029 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175037 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175046 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175053 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175061 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175069 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Feb 14 01:46:24.175077 kernel: printk: console [ttyAMA0] enabled Feb 14 01:46:24.175085 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Feb 14 01:46:24.175093 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Feb 14 01:46:24.175229 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.175308 kernel: acpi PNP0A08:00: _OSC: platform does not 
support [PCIeHotplug PME LTR] Feb 14 01:46:24.175376 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.175440 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.175503 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.175567 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Feb 14 01:46:24.175578 kernel: PCI host bridge to bus 000d:00 Feb 14 01:46:24.175649 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Feb 14 01:46:24.175712 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Feb 14 01:46:24.175770 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Feb 14 01:46:24.175853 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 01:46:24.175928 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 01:46:24.175996 kernel: pci 000d:00:01.0: enabling Extended Tags Feb 14 01:46:24.176063 kernel: pci 000d:00:01.0: supports D1 D2 Feb 14 01:46:24.176132 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176224 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 01:46:24.176290 kernel: pci 000d:00:02.0: supports D1 D2 Feb 14 01:46:24.176356 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176428 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 01:46:24.176495 kernel: pci 000d:00:03.0: supports D1 D2 Feb 14 01:46:24.176561 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176638 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 01:46:24.176705 kernel: pci 000d:00:04.0: supports D1 D2 Feb 14 01:46:24.176773 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176784 kernel: acpiphp: Slot [1] registered Feb 14 01:46:24.176792 
kernel: acpiphp: Slot [2] registered Feb 14 01:46:24.176799 kernel: acpiphp: Slot [3] registered Feb 14 01:46:24.176807 kernel: acpiphp: Slot [4] registered Feb 14 01:46:24.176867 kernel: pci_bus 000d:00: on NUMA node 0 Feb 14 01:46:24.176935 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.177003 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.177074 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.177143 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.177214 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.177282 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.177353 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.177422 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.177488 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.177556 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.177622 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.177688 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.177755 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Feb 14 01:46:24.177825 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 
01:46:24.177890 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] Feb 14 01:46:24.177957 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 01:46:24.178023 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Feb 14 01:46:24.178090 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 01:46:24.178156 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Feb 14 01:46:24.178226 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 01:46:24.178294 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178361 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178428 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178494 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178561 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178629 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178696 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178763 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178832 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178898 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178964 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.179030 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.179097 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.179163 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.179233 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.179300 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] Feb 14 01:46:24.179365 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.179435 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Feb 14 01:46:24.179500 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 01:46:24.179568 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Feb 14 01:46:24.179633 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Feb 14 01:46:24.179701 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 01:46:24.179767 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Feb 14 01:46:24.179836 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Feb 14 01:46:24.179901 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 01:46:24.179969 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Feb 14 01:46:24.180034 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Feb 14 01:46:24.180101 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 01:46:24.180163 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Feb 14 01:46:24.180226 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Feb 14 01:46:24.180301 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Feb 14 01:46:24.180365 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 01:46:24.180436 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Feb 14 01:46:24.180499 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 01:46:24.180579 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Feb 14 01:46:24.180644 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 01:46:24.180714 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Feb 14 
01:46:24.180776 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 01:46:24.180786 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Feb 14 01:46:24.180858 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.180922 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.180987 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.181053 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.181117 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.181184 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Feb 14 01:46:24.181195 kernel: PCI host bridge to bus 0000:00 Feb 14 01:46:24.181262 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Feb 14 01:46:24.181324 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 01:46:24.181383 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 14 01:46:24.181460 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 01:46:24.181534 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 01:46:24.181602 kernel: pci 0000:00:01.0: enabling Extended Tags Feb 14 01:46:24.181667 kernel: pci 0000:00:01.0: supports D1 D2 Feb 14 01:46:24.181734 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.181808 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 01:46:24.181878 kernel: pci 0000:00:02.0: supports D1 D2 Feb 14 01:46:24.181944 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.182019 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 01:46:24.182085 kernel: pci 0000:00:03.0: supports D1 D2 Feb 14 01:46:24.182153 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.182229 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 01:46:24.182296 kernel: pci 0000:00:04.0: supports D1 D2 Feb 14 01:46:24.182364 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.182375 kernel: acpiphp: Slot [1-1] registered Feb 14 01:46:24.182383 kernel: acpiphp: Slot [2-1] registered Feb 14 01:46:24.182390 kernel: acpiphp: Slot [3-1] registered Feb 14 01:46:24.182398 kernel: acpiphp: Slot [4-1] registered Feb 14 01:46:24.182456 kernel: pci_bus 0000:00: on NUMA node 0 Feb 14 01:46:24.182523 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.182589 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.182656 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.182724 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.182790 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.182856 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.182924 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.182990 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.183057 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.183125 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.183195 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 Feb 14 01:46:24.183261 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.183329 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Feb 14 01:46:24.183395 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 01:46:24.183462 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Feb 14 01:46:24.183527 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 01:46:24.183594 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Feb 14 01:46:24.183664 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 01:46:24.183731 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Feb 14 01:46:24.183798 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 01:46:24.183864 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.183931 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.183996 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184062 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184127 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184200 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184267 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184333 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184399 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184465 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184530 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184597 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184663 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184731 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184798 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184862 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184929 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.184995 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Feb 14 01:46:24.185062 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 01:46:24.185127 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Feb 14 01:46:24.185197 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Feb 14 01:46:24.185266 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 01:46:24.185335 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Feb 14 01:46:24.185401 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Feb 14 01:46:24.185471 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 01:46:24.185536 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Feb 14 01:46:24.185602 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Feb 14 01:46:24.185668 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 01:46:24.185730 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Feb 14 01:46:24.185788 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 01:46:24.185862 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Feb 14 01:46:24.185925 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 01:46:24.185994 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Feb 14 
01:46:24.186058 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 01:46:24.186135 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Feb 14 01:46:24.186202 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 01:46:24.186272 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Feb 14 01:46:24.186337 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 01:46:24.186348 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Feb 14 01:46:24.186418 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.186484 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.186548 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.186612 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.186679 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.186742 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Feb 14 01:46:24.186753 kernel: PCI host bridge to bus 0005:00 Feb 14 01:46:24.186819 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Feb 14 01:46:24.186881 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 01:46:24.186941 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Feb 14 01:46:24.187016 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 01:46:24.187098 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 01:46:24.187165 kernel: pci 0005:00:01.0: supports D1 D2 Feb 14 01:46:24.187237 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187311 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
Feb 14 01:46:24.187381 kernel: pci 0005:00:03.0: supports D1 D2 Feb 14 01:46:24.187448 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187522 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 01:46:24.187591 kernel: pci 0005:00:05.0: supports D1 D2 Feb 14 01:46:24.187659 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187733 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Feb 14 01:46:24.187800 kernel: pci 0005:00:07.0: supports D1 D2 Feb 14 01:46:24.187866 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187877 kernel: acpiphp: Slot [1-2] registered Feb 14 01:46:24.187885 kernel: acpiphp: Slot [2-2] registered Feb 14 01:46:24.187960 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Feb 14 01:46:24.188030 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Feb 14 01:46:24.188098 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Feb 14 01:46:24.188213 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Feb 14 01:46:24.188296 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Feb 14 01:46:24.188369 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Feb 14 01:46:24.188429 kernel: pci_bus 0005:00: on NUMA node 0 Feb 14 01:46:24.188500 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.188566 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.188637 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.188715 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.188782 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.188849 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.188920 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.188988 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.189054 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 14 01:46:24.189122 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.189193 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.189261 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Feb 14 01:46:24.189327 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Feb 14 01:46:24.189399 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 01:46:24.189465 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Feb 14 01:46:24.189532 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 01:46:24.189598 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Feb 14 01:46:24.189665 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 01:46:24.189732 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Feb 14 01:46:24.189798 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 01:46:24.189866 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.189934 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190001 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 
01:46:24.190067 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190135 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190205 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190273 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190339 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190406 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190475 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190541 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190606 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190673 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190740 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190806 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190874 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190940 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.191005 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Feb 14 01:46:24.191074 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 01:46:24.191141 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Feb 14 01:46:24.191217 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Feb 14 01:46:24.191282 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 01:46:24.191353 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Feb 14 01:46:24.191421 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Feb 14 01:46:24.191490 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Feb 14 01:46:24.191556 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Feb 14 01:46:24.191623 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 01:46:24.191694 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Feb 14 01:46:24.191762 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Feb 14 01:46:24.191829 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Feb 14 01:46:24.191895 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Feb 14 01:46:24.191965 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 01:46:24.192026 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Feb 14 01:46:24.192087 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 01:46:24.192159 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Feb 14 01:46:24.192226 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 01:46:24.192304 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Feb 14 01:46:24.192370 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 01:46:24.192438 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Feb 14 01:46:24.192502 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 01:46:24.192571 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Feb 14 01:46:24.192635 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 01:46:24.192646 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Feb 14 01:46:24.192726 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.192795 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.192866 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability] Feb 14 01:46:24.192934 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.193005 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.193070 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Feb 14 01:46:24.193080 kernel: PCI host bridge to bus 0003:00 Feb 14 01:46:24.193151 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Feb 14 01:46:24.193216 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Feb 14 01:46:24.193276 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Feb 14 01:46:24.193349 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 01:46:24.193428 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 01:46:24.193495 kernel: pci 0003:00:01.0: supports D1 D2 Feb 14 01:46:24.193565 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.193637 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 01:46:24.193706 kernel: pci 0003:00:03.0: supports D1 D2 Feb 14 01:46:24.193774 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.193846 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 01:46:24.193916 kernel: pci 0003:00:05.0: supports D1 D2 Feb 14 01:46:24.193981 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.193994 kernel: acpiphp: Slot [1-3] registered Feb 14 01:46:24.194002 kernel: acpiphp: Slot [2-3] registered Feb 14 01:46:24.194078 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Feb 14 01:46:24.194148 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Feb 14 01:46:24.194270 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Feb 14 01:46:24.194341 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Feb 14 01:46:24.194409 kernel: pci 
0003:03:00.0: PME# supported from D0 D3hot D3cold Feb 14 01:46:24.194475 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Feb 14 01:46:24.194545 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 01:46:24.194611 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Feb 14 01:46:24.194678 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 01:46:24.194746 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Feb 14 01:46:24.194821 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Feb 14 01:46:24.194889 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Feb 14 01:46:24.194955 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Feb 14 01:46:24.195025 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Feb 14 01:46:24.195091 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Feb 14 01:46:24.195158 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Feb 14 01:46:24.195235 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 01:46:24.195304 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Feb 14 01:46:24.195370 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 01:46:24.195429 kernel: pci_bus 0003:00: on NUMA node 0 Feb 14 01:46:24.195500 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.195565 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.195630 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.195696 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.195761 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.195825 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.195892 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Feb 14 01:46:24.195958 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Feb 14 01:46:24.196025 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Feb 14 01:46:24.196090 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 01:46:24.196157 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 14 01:46:24.196227 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 01:46:24.196307 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 14 01:46:24.196377 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 01:46:24.196443 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.196512 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196578 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.196645 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196710 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.196776 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196842 kernel: pci 0003:00:05.0: BAR 13: no 
space for [io size 0x1000] Feb 14 01:46:24.196908 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196974 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.197043 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.197108 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.197176 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.197247 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.197313 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Feb 14 01:46:24.197381 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 01:46:24.197446 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Feb 14 01:46:24.197513 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Feb 14 01:46:24.197581 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 01:46:24.197651 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] Feb 14 01:46:24.197724 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] Feb 14 01:46:24.197793 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] Feb 14 01:46:24.197862 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] Feb 14 01:46:24.197931 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] Feb 14 01:46:24.198002 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] Feb 14 01:46:24.198072 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] Feb 14 01:46:24.198141 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] Feb 14 01:46:24.198213 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Feb 14 01:46:24.198282 kernel: pci 0003:03:00.0: BAR 2: failed to 
assign [io size 0x0020]
Feb 14 01:46:24.198350 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
Feb 14 01:46:24.198419 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
Feb 14 01:46:24.198489 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
Feb 14 01:46:24.198558 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
Feb 14 01:46:24.198627 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
Feb 14 01:46:24.198696 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
Feb 14 01:46:24.198764 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
Feb 14 01:46:24.198830 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
Feb 14 01:46:24.198897 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
Feb 14 01:46:24.198961 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 14 01:46:24.199025 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
Feb 14 01:46:24.199084 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
Feb 14 01:46:24.199167 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 14 01:46:24.199232 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
Feb 14 01:46:24.199304 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 14 01:46:24.199366 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
Feb 14 01:46:24.199438 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 14 01:46:24.199500 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
Feb 14 01:46:24.199511 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
Feb 14 01:46:24.199584 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 01:46:24.199650 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
Feb 14 01:46:24.199714 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
Feb 14 01:46:24.199781 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Feb 14 01:46:24.199845 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
Feb 14 01:46:24.199912 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
Feb 14 01:46:24.199924 kernel: PCI host bridge to bus 000c:00
Feb 14 01:46:24.199991 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
Feb 14 01:46:24.200052 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
Feb 14 01:46:24.200110 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
Feb 14 01:46:24.200192 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000
Feb 14 01:46:24.200266 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400
Feb 14 01:46:24.200334 kernel: pci 000c:00:01.0: enabling Extended Tags
Feb 14 01:46:24.200400 kernel: pci 000c:00:01.0: supports D1 D2
Feb 14 01:46:24.200467 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.200540 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400
Feb 14 01:46:24.200608 kernel: pci 000c:00:02.0: supports D1 D2
Feb 14 01:46:24.200678 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.200752 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400
Feb 14 01:46:24.200820 kernel: pci 000c:00:03.0: supports D1 D2
Feb 14 01:46:24.200886 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.200959 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400
Feb 14 01:46:24.201026 kernel: pci 000c:00:04.0: supports D1 D2
Feb 14 01:46:24.201093 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.201106 kernel: acpiphp: Slot [1-4] registered
Feb 14 01:46:24.201114 kernel: acpiphp: Slot [2-4] registered
Feb 14 01:46:24.201123 kernel: acpiphp: Slot [3-2] registered
Feb 14 01:46:24.201131 kernel: acpiphp: Slot [4-2] registered
Feb 14 01:46:24.201445 kernel: pci_bus 000c:00: on NUMA node 0
Feb 14 01:46:24.201526 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 14 01:46:24.201592 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Feb 14 01:46:24.201658 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Feb 14 01:46:24.201728 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 14 01:46:24.201794 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 14 01:46:24.201860 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Feb 14 01:46:24.201925 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 14 01:46:24.201990 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Feb 14 01:46:24.202055 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Feb 14 01:46:24.202123 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 14 01:46:24.202197 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.202263 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.202330 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff]
Feb 14 01:46:24.202395 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref]
Feb 14 01:46:24.202460 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff]
Feb 14 01:46:24.202525 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref]
Feb 14 01:46:24.202590 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
Feb 14 01:46:24.202658 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref]
Feb 14 01:46:24.202723 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff]
Feb 14 01:46:24.202789 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref]
Feb 14 01:46:24.202853 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.202919 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.202984 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.203049 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.203113 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.203184 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.203249 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.203314 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.203379 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.203444 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.203508 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.203573 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.203638 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.203702 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.203770 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.203836 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.203901 kernel: pci 000c:00:01.0: PCI bridge to [bus 01]
Feb 14 01:46:24.203965 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]
Feb 14 01:46:24.204031 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]
Feb 14 01:46:24.204096 kernel: pci 000c:00:02.0: PCI bridge to [bus 02]
Feb 14 01:46:24.204162 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]
Feb 14 01:46:24.204233 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]
Feb 14 01:46:24.204299 kernel: pci 000c:00:03.0: PCI bridge to [bus 03]
Feb 14 01:46:24.204364 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]
Feb 14 01:46:24.204429 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]
Feb 14 01:46:24.204495 kernel: pci 000c:00:04.0: PCI bridge to [bus 04]
Feb 14 01:46:24.204559 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]
Feb 14 01:46:24.204627 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]
Feb 14 01:46:24.204687 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window]
Feb 14 01:46:24.204746 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window]
Feb 14 01:46:24.204816 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff]
Feb 14 01:46:24.204878 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref]
Feb 14 01:46:24.204955 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff]
Feb 14 01:46:24.205017 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref]
Feb 14 01:46:24.205088 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff]
Feb 14 01:46:24.205149 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref]
Feb 14 01:46:24.205221 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff]
Feb 14 01:46:24.205284 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref]
Feb 14 01:46:24.205294 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff])
Feb 14 01:46:24.205365 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 01:46:24.205432 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR]
Feb 14 01:46:24.205495 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability]
Feb 14 01:46:24.205557 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Feb 14 01:46:24.205620 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00
Feb 14 01:46:24.205682 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff]
Feb 14 01:46:24.205693 kernel: PCI host bridge to bus 0002:00
Feb 14 01:46:24.205761 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window]
Feb 14 01:46:24.205821 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window]
Feb 14 01:46:24.205880 kernel: pci_bus 0002:00: root bus resource [bus 00-ff]
Feb 14 01:46:24.205952 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000
Feb 14 01:46:24.206026 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400
Feb 14 01:46:24.206091 kernel: pci 0002:00:01.0: supports D1 D2
Feb 14 01:46:24.206157 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.206233 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400
Feb 14 01:46:24.206301 kernel: pci 0002:00:03.0: supports D1 D2
Feb 14 01:46:24.206366 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.206438 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400
Feb 14 01:46:24.206503 kernel: pci 0002:00:05.0: supports D1 D2
Feb 14 01:46:24.206568 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.206640 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400
Feb 14 01:46:24.206708 kernel: pci 0002:00:07.0: supports D1 D2
Feb 14 01:46:24.206773 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.206783 kernel: acpiphp: Slot [1-5] registered
Feb 14 01:46:24.206792 kernel: acpiphp: Slot [2-5] registered
Feb 14 01:46:24.206800 kernel: acpiphp: Slot [3-3] registered
Feb 14 01:46:24.206808 kernel: acpiphp: Slot [4-3] registered
Feb 14 01:46:24.206864 kernel: pci_bus 0002:00: on NUMA node 0
Feb 14 01:46:24.206929 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 14 01:46:24.206994 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Feb 14 01:46:24.207066 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Feb 14 01:46:24.207135 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 14 01:46:24.207204 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 14 01:46:24.207270 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Feb 14 01:46:24.207340 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 14 01:46:24.207406 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Feb 14 01:46:24.207474 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Feb 14 01:46:24.207540 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 14 01:46:24.207606 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.207673 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.207739 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff]
Feb 14 01:46:24.207808 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref]
Feb 14 01:46:24.207873 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff]
Feb 14 01:46:24.207938 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref]
Feb 14 01:46:24.208002 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff]
Feb 14 01:46:24.208068 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref]
Feb 14 01:46:24.208133 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff]
Feb 14 01:46:24.208202 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref]
Feb 14 01:46:24.208266 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.208335 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.208399 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.208468 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.208533 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.208598 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.208663 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.208727 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.208793 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.208860 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.208926 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.208990 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.209056 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.209120 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.209214 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.209283 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.209348 kernel: pci 0002:00:01.0: PCI bridge to [bus 01]
Feb 14 01:46:24.209412 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]
Feb 14 01:46:24.209481 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]
Feb 14 01:46:24.209545 kernel: pci 0002:00:03.0: PCI bridge to [bus 02]
Feb 14 01:46:24.209609 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]
Feb 14 01:46:24.209674 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]
Feb 14 01:46:24.209739 kernel: pci 0002:00:05.0: PCI bridge to [bus 03]
Feb 14 01:46:24.209803 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]
Feb 14 01:46:24.209870 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]
Feb 14 01:46:24.209935 kernel: pci 0002:00:07.0: PCI bridge to [bus 04]
Feb 14 01:46:24.210000 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]
Feb 14 01:46:24.210064 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]
Feb 14 01:46:24.210125 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window]
Feb 14 01:46:24.210188 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window]
Feb 14 01:46:24.210263 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff]
Feb 14 01:46:24.210325 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref]
Feb 14 01:46:24.210394 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff]
Feb 14 01:46:24.210454 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref]
Feb 14 01:46:24.210531 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff]
Feb 14 01:46:24.210593 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref]
Feb 14 01:46:24.210663 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff]
Feb 14 01:46:24.210724 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref]
Feb 14 01:46:24.210736 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff])
Feb 14 01:46:24.210807 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 01:46:24.210872 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR]
Feb 14 01:46:24.210937 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability]
Feb 14 01:46:24.211002 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops
Feb 14 01:46:24.211068 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00
Feb 14 01:46:24.211132 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff]
Feb 14 01:46:24.211143 kernel: PCI host bridge to bus 0001:00
Feb 14 01:46:24.211211 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window]
Feb 14 01:46:24.211273 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window]
Feb 14 01:46:24.211333 kernel: pci_bus 0001:00: root bus resource [bus 00-ff]
Feb 14 01:46:24.211409 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000
Feb 14 01:46:24.211482 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400
Feb 14 01:46:24.211548 kernel: pci 0001:00:01.0: enabling Extended Tags
Feb 14 01:46:24.211624 kernel: pci 0001:00:01.0: supports D1 D2
Feb 14 01:46:24.211692 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.211765 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400
Feb 14 01:46:24.211832 kernel: pci 0001:00:02.0: supports D1 D2
Feb 14 01:46:24.211900 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.211972 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400
Feb 14 01:46:24.212038 kernel: pci 0001:00:03.0: supports D1 D2
Feb 14 01:46:24.212103 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.212177 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400
Feb 14 01:46:24.212250 kernel: pci 0001:00:04.0: supports D1 D2
Feb 14 01:46:24.212318 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.212331 kernel: acpiphp: Slot [1-6] registered
Feb 14 01:46:24.212404 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 14 01:46:24.212473 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref]
Feb 14 01:46:24.212541 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref]
Feb 14 01:46:24.212608 kernel: pci 0001:01:00.0: PME# supported from D3cold
Feb 14 01:46:24.212676 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 14 01:46:24.212750 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 14 01:46:24.212822 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref]
Feb 14 01:46:24.212889 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref]
Feb 14 01:46:24.212956 kernel: pci 0001:01:00.1: PME# supported from D3cold
Feb 14 01:46:24.212967 kernel: acpiphp: Slot [2-6] registered
Feb 14 01:46:24.212975 kernel: acpiphp: Slot [3-4] registered
Feb 14 01:46:24.212983 kernel: acpiphp: Slot [4-4] registered
Feb 14 01:46:24.213041 kernel: pci_bus 0001:00: on NUMA node 0
Feb 14 01:46:24.213108 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 14 01:46:24.213302 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 14 01:46:24.213394 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 14 01:46:24.213461 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Feb 14 01:46:24.213528 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 14 01:46:24.213593 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Feb 14 01:46:24.213658 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Feb 14 01:46:24.213723 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 14 01:46:24.213792 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.213859 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.213925 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref]
Feb 14 01:46:24.213990 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff]
Feb 14 01:46:24.214055 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff]
Feb 14 01:46:24.214120 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref]
Feb 14 01:46:24.214189 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff]
Feb 14 01:46:24.214258 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref]
Feb 14 01:46:24.214322 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff]
Feb 14 01:46:24.214388 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref]
Feb 14 01:46:24.214452 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.214517 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.214580 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.214645 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.214710 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.214777 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.214843 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.214907 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.214973 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.215037 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.215102 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.215166 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.215234 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.215301 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.215370 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.215434 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.215502 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref]
Feb 14 01:46:24.215569 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref]
Feb 14 01:46:24.215637 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref]
Feb 14 01:46:24.215704 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref]
Feb 14 01:46:24.215768 kernel: pci 0001:00:01.0: PCI bridge to [bus 01]
Feb 14 01:46:24.215836 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]
Feb 14 01:46:24.215900 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]
Feb 14 01:46:24.215966 kernel: pci 0001:00:02.0: PCI bridge to [bus 02]
Feb 14 01:46:24.216030 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]
Feb 14 01:46:24.216095 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]
Feb 14 01:46:24.216160 kernel: pci 0001:00:03.0: PCI bridge to [bus 03]
Feb 14 01:46:24.216230 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]
Feb 14 01:46:24.216295 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]
Feb 14 01:46:24.216361 kernel: pci 0001:00:04.0: PCI bridge to [bus 04]
Feb 14 01:46:24.216425 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]
Feb 14 01:46:24.216491 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]
Feb 14 01:46:24.216551 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window]
Feb 14 01:46:24.216609 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window]
Feb 14 01:46:24.216692 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff]
Feb 14 01:46:24.216753 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref]
Feb 14 01:46:24.216822 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff]
Feb 14 01:46:24.216883 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref]
Feb 14 01:46:24.216951 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff]
Feb 14 01:46:24.217012 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref]
Feb 14 01:46:24.217082 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff]
Feb 14 01:46:24.217142 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref]
Feb 14 01:46:24.217153 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff])
Feb 14 01:46:24.217227 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 01:46:24.217292 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR]
Feb 14 01:46:24.217355 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability]
Feb 14 01:46:24.217420 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops
Feb 14 01:46:24.217484 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00
Feb 14 01:46:24.217547 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff]
Feb 14 01:46:24.217558 kernel: PCI host bridge to bus 0004:00
Feb 14 01:46:24.217622 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window]
Feb 14 01:46:24.217682 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window]
Feb 14 01:46:24.217739 kernel: pci_bus 0004:00: root bus resource [bus 00-ff]
Feb 14 01:46:24.217815 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000
Feb 14 01:46:24.217887 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400
Feb 14 01:46:24.217954 kernel: pci 0004:00:01.0: supports D1 D2
Feb 14 01:46:24.218019 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.218092 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400
Feb 14 01:46:24.218158 kernel: pci 0004:00:03.0: supports D1 D2
Feb 14 01:46:24.218228 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.218304 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400
Feb 14 01:46:24.218371 kernel: pci 0004:00:05.0: supports D1 D2
Feb 14 01:46:24.218436 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot
Feb 14 01:46:24.218512 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400
Feb 14 01:46:24.218581 kernel: pci 0004:01:00.0: enabling Extended Tags
Feb 14 01:46:24.218647 kernel: pci 0004:01:00.0: supports D1 D2
Feb 14 01:46:24.218714 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 14 01:46:24.218795 kernel: pci_bus 0004:02: extended config space not accessible
Feb 14 01:46:24.218874 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000
Feb 14 01:46:24.218944 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff]
Feb 14 01:46:24.219014 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff]
Feb 14 01:46:24.219084 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f]
Feb 14 01:46:24.219153 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb
Feb 14 01:46:24.219226 kernel: pci 0004:02:00.0: supports D1 D2
Feb 14 01:46:24.219299 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 14 01:46:24.219377 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330
Feb 14 01:46:24.219445 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit]
Feb 14 01:46:24.219512 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold
Feb 14 01:46:24.219574 kernel: pci_bus 0004:00: on NUMA node 0
Feb 14 01:46:24.219639 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000
Feb 14 01:46:24.219707 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 14 01:46:24.219774 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Feb 14 01:46:24.219840 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 14 01:46:24.219907 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 14 01:46:24.219973 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.220038 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 14 01:46:24.220104 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
Feb 14 01:46:24.220168 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref]
Feb 14 01:46:24.220241 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff]
Feb 14 01:46:24.220307 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref]
Feb 14 01:46:24.220373 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff]
Feb 14 01:46:24.220438 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref]
Feb 14 01:46:24.220504 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.220571 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.220635 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.220701 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.220769 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.220835 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.220900 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.220965 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.221031 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.221097 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.221161 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.221230 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.221299 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
Feb 14 01:46:24.221370 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000]
Feb 14 01:46:24.221439 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000]
Feb 14 01:46:24.221509 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff]
Feb 14 01:46:24.221580 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff]
Feb 14 01:46:24.221650 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080]
Feb 14 01:46:24.221720 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080]
Feb 14 01:46:24.221787 kernel: pci 0004:01:00.0: PCI bridge to [bus 02]
Feb 14 01:46:24.221857 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]
Feb 14 01:46:24.221923 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02]
Feb 14 01:46:24.221988 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]
Feb 14 01:46:24.222054 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]
Feb 14 01:46:24.222122 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit]
Feb 14 01:46:24.222191 kernel: pci 0004:00:03.0: PCI bridge to [bus 03]
Feb 14 01:46:24.222257 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]
Feb 14 01:46:24.222322 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]
Feb 14 01:46:24.222390 kernel: pci 0004:00:05.0: PCI bridge to [bus 04]
Feb 14 01:46:24.222456 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]
Feb 14 01:46:24.222521 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]
Feb 14 01:46:24.222582 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 14 01:46:24.222640 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window]
Feb 14 01:46:24.222701 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window]
Feb 14 01:46:24.222772 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff]
Feb 14 01:46:24.222834 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref]
Feb 14 01:46:24.222900 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff]
Feb 14 01:46:24.222968 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff]
Feb 14 01:46:24.223030 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref]
Feb 14 01:46:24.223098 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff]
Feb 14 01:46:24.223161 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref]
Feb 14 01:46:24.223172 kernel: iommu: Default domain type: Translated
Feb 14 01:46:24.223184 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 14 01:46:24.223193 kernel: efivars: Registered efivars operations
Feb 14 01:46:24.223264 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device
Feb 14 01:46:24.223335 kernel: pci 0004:02:00.0: vgaarb: bridge control possible
Feb 14 01:46:24.223405 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
Feb 14 01:46:24.223416 kernel: vgaarb: loaded
Feb 14 01:46:24.223427 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 14 01:46:24.223436 kernel: VFS: Disk quotas dquot_6.6.0
Feb 14 01:46:24.223444 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 14 01:46:24.223453 kernel: pnp: PnP ACPI init
Feb 14 01:46:24.223524 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved
Feb 14 01:46:24.223585 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved
Feb 14 01:46:24.223646 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved
Feb 14 01:46:24.223708 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved
Feb 14 01:46:24.223768 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved
Feb 14 01:46:24.223828 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved
Feb 14 01:46:24.223889 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could
not be reserved Feb 14 01:46:24.223949 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Feb 14 01:46:24.223960 kernel: pnp: PnP ACPI: found 1 devices Feb 14 01:46:24.223968 kernel: NET: Registered PF_INET protocol family Feb 14 01:46:24.223977 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.223987 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 14 01:46:24.223996 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 14 01:46:24.224004 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 14 01:46:24.224012 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.224021 kernel: TCP: Hash tables configured (established 524288 bind 65536) Feb 14 01:46:24.224029 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.224037 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.224046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 14 01:46:24.224114 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Feb 14 01:46:24.224127 kernel: kvm [1]: IPA Size Limit: 48 bits Feb 14 01:46:24.224136 kernel: kvm [1]: GICv3: no GICV resource entry Feb 14 01:46:24.224144 kernel: kvm [1]: disabling GICv2 emulation Feb 14 01:46:24.224152 kernel: kvm [1]: GIC system register CPU interface enabled Feb 14 01:46:24.224161 kernel: kvm [1]: vgic interrupt IRQ9 Feb 14 01:46:24.224169 kernel: kvm [1]: VHE mode initialized successfully Feb 14 01:46:24.224177 kernel: Initialise system trusted keyrings Feb 14 01:46:24.224189 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Feb 14 01:46:24.224198 kernel: Key type asymmetric registered Feb 14 01:46:24.224207 kernel: Asymmetric key parser 'x509' registered Feb 14 01:46:24.224215 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Feb 14 01:46:24.224223 kernel: io scheduler mq-deadline registered Feb 14 01:46:24.224232 kernel: io scheduler kyber registered Feb 14 01:46:24.224240 kernel: io scheduler bfq registered Feb 14 01:46:24.224248 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 14 01:46:24.224256 kernel: ACPI: button: Power Button [PWRB] Feb 14 01:46:24.224265 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). Feb 14 01:46:24.224273 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 14 01:46:24.224351 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Feb 14 01:46:24.224414 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.224477 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.224538 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.224600 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Feb 14 01:46:24.224661 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Feb 14 01:46:24.224733 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Feb 14 01:46:24.224794 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.224856 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.224917 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.224978 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Feb 14 01:46:24.225040 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Feb 14 01:46:24.225108 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Feb 14 01:46:24.225173 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.225238 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.225303 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.225364 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Feb 14 01:46:24.225427 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Feb 14 01:46:24.225496 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Feb 14 01:46:24.225562 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.225624 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.225687 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.225748 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Feb 14 01:46:24.225812 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Feb 14 01:46:24.225890 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Feb 14 01:46:24.225953 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.226018 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.226079 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.226143 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Feb 14 01:46:24.226207 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Feb 14 01:46:24.226281 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Feb 14 01:46:24.226344 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.226409 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.226471 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.226533 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq Feb 14 01:46:24.226595 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Feb 14 01:46:24.226665 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Feb 14 01:46:24.226727 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.226790 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.226855 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.226918 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Feb 14 01:46:24.226984 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Feb 14 01:46:24.227051 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Feb 14 01:46:24.227115 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.227177 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.227246 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.227309 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Feb 14 01:46:24.227372 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Feb 14 01:46:24.227383 kernel: thunder_xcv, ver 1.0 Feb 14 01:46:24.227391 kernel: thunder_bgx, ver 1.0 Feb 14 01:46:24.227400 kernel: nicpf, ver 1.0 Feb 14 01:46:24.227408 kernel: nicvf, ver 1.0 Feb 14 01:46:24.227477 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 14 01:46:24.227544 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-14T01:46:22 UTC (1739497582) Feb 14 01:46:24.227555 kernel: efifb: probing for efifb Feb 14 01:46:24.227563 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Feb 14 01:46:24.227572 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Feb 14 01:46:24.227580 kernel: efifb: scrolling: redraw Feb 
14 01:46:24.227588 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 14 01:46:24.227597 kernel: Console: switching to colour frame buffer device 100x37 Feb 14 01:46:24.227605 kernel: fb0: EFI VGA frame buffer device Feb 14 01:46:24.227615 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Feb 14 01:46:24.227624 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 14 01:46:24.227632 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 14 01:46:24.227640 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 14 01:46:24.227648 kernel: watchdog: Hard watchdog permanently disabled Feb 14 01:46:24.227657 kernel: NET: Registered PF_INET6 protocol family Feb 14 01:46:24.227665 kernel: Segment Routing with IPv6 Feb 14 01:46:24.227673 kernel: In-situ OAM (IOAM) with IPv6 Feb 14 01:46:24.227681 kernel: NET: Registered PF_PACKET protocol family Feb 14 01:46:24.227689 kernel: Key type dns_resolver registered Feb 14 01:46:24.227698 kernel: registered taskstats version 1 Feb 14 01:46:24.227707 kernel: Loading compiled-in X.509 certificates Feb 14 01:46:24.227715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 14 01:46:24.227723 kernel: Key type .fscrypt registered Feb 14 01:46:24.227731 kernel: Key type fscrypt-provisioning registered Feb 14 01:46:24.227741 kernel: ima: No TPM chip found, activating TPM-bypass! 
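Earlier in this enumeration pass the kernel printed "pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc". On Flatcar, kernel command-line flags are usually added through the OEM GRUB override file; the fragment below is a hedged sketch of that (the path and the `$linux_append` variable are assumptions from Flatcar's documented layout, so verify them on the actual image before editing):

```shell
# /usr/share/oem/grub.cfg -- Flatcar's GRUB override file (path assumed).
# Appends pci=realloc so the kernel re-allocates the unassignable resources.
set linux_append="$linux_append pci=realloc"
```

After a reboot the flag should be visible in `/proc/cmdline`; it is only needed if a device behind bus 0004:00 actually fails to work because of the unassigned I/O windows.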
Feb 14 01:46:24.227749 kernel: ima: Allocated hash algorithm: sha1
Feb 14 01:46:24.227757 kernel: ima: No architecture policies found
Feb 14 01:46:24.227766 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 14 01:46:24.227836 kernel: pcieport 000d:00:01.0: Adding to iommu group 0
Feb 14 01:46:24.227906 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91
Feb 14 01:46:24.227974 kernel: pcieport 000d:00:02.0: Adding to iommu group 1
Feb 14 01:46:24.228041 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91
Feb 14 01:46:24.228109 kernel: pcieport 000d:00:03.0: Adding to iommu group 2
Feb 14 01:46:24.228177 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91
Feb 14 01:46:24.228250 kernel: pcieport 000d:00:04.0: Adding to iommu group 3
Feb 14 01:46:24.228316 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91
Feb 14 01:46:24.228388 kernel: pcieport 0000:00:01.0: Adding to iommu group 4
Feb 14 01:46:24.228454 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92
Feb 14 01:46:24.228523 kernel: pcieport 0000:00:02.0: Adding to iommu group 5
Feb 14 01:46:24.228590 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92
Feb 14 01:46:24.228658 kernel: pcieport 0000:00:03.0: Adding to iommu group 6
Feb 14 01:46:24.228725 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92
Feb 14 01:46:24.228794 kernel: pcieport 0000:00:04.0: Adding to iommu group 7
Feb 14 01:46:24.228860 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92
Feb 14 01:46:24.228933 kernel: pcieport 0005:00:01.0: Adding to iommu group 8
Feb 14 01:46:24.228999 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93
Feb 14 01:46:24.229068 kernel: pcieport 0005:00:03.0: Adding to iommu group 9
Feb 14 01:46:24.229134 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93
Feb 14 01:46:24.229206 kernel: pcieport 0005:00:05.0: Adding to iommu group 10
Feb 14 01:46:24.229272 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93
Feb 14 01:46:24.229340 kernel: pcieport 0005:00:07.0: Adding to iommu group 11
Feb 14 01:46:24.229408 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93
Feb 14 01:46:24.229476 kernel: pcieport 0003:00:01.0: Adding to iommu group 12
Feb 14 01:46:24.229545 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94
Feb 14 01:46:24.229612 kernel: pcieport 0003:00:03.0: Adding to iommu group 13
Feb 14 01:46:24.229680 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94
Feb 14 01:46:24.229747 kernel: pcieport 0003:00:05.0: Adding to iommu group 14
Feb 14 01:46:24.229814 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94
Feb 14 01:46:24.229882 kernel: pcieport 000c:00:01.0: Adding to iommu group 15
Feb 14 01:46:24.229950 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95
Feb 14 01:46:24.230018 kernel: pcieport 000c:00:02.0: Adding to iommu group 16
Feb 14 01:46:24.230087 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95
Feb 14 01:46:24.230154 kernel: pcieport 000c:00:03.0: Adding to iommu group 17
Feb 14 01:46:24.230224 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95
Feb 14 01:46:24.230292 kernel: pcieport 000c:00:04.0: Adding to iommu group 18
Feb 14 01:46:24.230362 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95
Feb 14 01:46:24.230431 kernel: pcieport 0002:00:01.0: Adding to iommu group 19
Feb 14 01:46:24.230498 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96
Feb 14 01:46:24.230565 kernel: pcieport 0002:00:03.0: Adding to iommu group 20
Feb 14 01:46:24.230634 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96
Feb 14 01:46:24.230702 kernel: pcieport 0002:00:05.0: Adding to iommu group 21
Feb 14 01:46:24.230768 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96
Feb 14 01:46:24.230836 kernel: pcieport 0002:00:07.0: Adding to iommu group 22
Feb 14 01:46:24.230902 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96
Feb 14 01:46:24.230971 kernel: pcieport 0001:00:01.0: Adding to iommu group 23
Feb 14 01:46:24.231036 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97
Feb 14 01:46:24.231104 kernel: pcieport 0001:00:02.0: Adding to iommu group 24
Feb 14 01:46:24.231172 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97
Feb 14 01:46:24.231246 kernel: pcieport 0001:00:03.0: Adding to iommu group 25
Feb 14 01:46:24.231312 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97
Feb 14 01:46:24.231380 kernel: pcieport 0001:00:04.0: Adding to iommu group 26
Feb 14 01:46:24.231447 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97
Feb 14 01:46:24.231516 kernel: pcieport 0004:00:01.0: Adding to iommu group 27
Feb 14 01:46:24.231583 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98
Feb 14 01:46:24.231650 kernel: pcieport 0004:00:03.0: Adding to iommu group 28
Feb 14 01:46:24.231721 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98
Feb 14 01:46:24.231789 kernel: pcieport 0004:00:05.0: Adding to iommu group 29
Feb 14 01:46:24.231857 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98
Feb 14 01:46:24.231927 kernel: pcieport 0004:01:00.0: Adding to iommu group 30
Feb 14 01:46:24.231938 kernel: clk: Disabling unused clocks
Feb 14 01:46:24.231946 kernel: Freeing unused kernel memory: 39360K
Feb 14 01:46:24.231954 kernel: Run /init as init process
Feb 14 01:46:24.231963 kernel: with arguments:
Feb 14 01:46:24.231973 kernel: /init
Feb 14 01:46:24.231981 kernel: with environment:
Feb 14 01:46:24.231989 kernel: HOME=/
Feb 14 01:46:24.231997 kernel: TERM=linux
Feb 14 01:46:24.232005 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 14 01:46:24.232016 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 14 01:46:24.232026 systemd[1]: Detected architecture arm64.
Feb 14 01:46:24.232035 systemd[1]: Running in initrd.
Feb 14 01:46:24.232045 systemd[1]: No hostname configured, using default hostname.
Feb 14 01:46:24.232054 systemd[1]: Hostname set to .
Feb 14 01:46:24.232062 systemd[1]: Initializing machine ID from random generator.
Feb 14 01:46:24.232072 systemd[1]: Queued start job for default target initrd.target.
Feb 14 01:46:24.232081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 01:46:24.232089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 01:46:24.232099 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 14 01:46:24.232108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 14 01:46:24.232118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 14 01:46:24.232127 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 14 01:46:24.232136 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 14 01:46:24.232146 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 14 01:46:24.232154 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 01:46:24.232163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 14 01:46:24.232171 systemd[1]: Reached target paths.target - Path Units.
Feb 14 01:46:24.232186 systemd[1]: Reached target slices.target - Slice Units.
Feb 14 01:46:24.232195 systemd[1]: Reached target swap.target - Swaps.
Feb 14 01:46:24.232203 systemd[1]: Reached target timers.target - Timer Units.
Feb 14 01:46:24.232212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
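The `.device` unit names in the entries above (e.g. `dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device`) follow systemd's path-escaping rules: the leading "/" is dropped, each "-" in a path component becomes `\x2d`, and each remaining "/" becomes "-". The helper below is a simplified illustration of that mapping, covering only the characters seen in this log rather than the full `systemd-escape` algorithm:

```shell
# unit_escape: simplified sketch of systemd's path-to-unit-name escaping.
# Order matters: escape "-" first, then turn path separators into "-".
unit_escape() {
  printf '%s\n' "${1#/}" | sed -e 's/-/\\x2d/g' -e 's|/|-|g'
}

unit_escape /dev/disk/by-label/EFI-SYSTEM   # dev-disk-by\x2dlabel-EFI\x2dSYSTEM
```

On a real system `systemd-escape --path /dev/disk/by-label/EFI-SYSTEM` performs the authoritative conversion, including edge cases this sketch ignores.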
Feb 14 01:46:24.232220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 01:46:24.232229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 14 01:46:24.232238 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 14 01:46:24.232246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 01:46:24.232257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 14 01:46:24.232266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 01:46:24.232274 systemd[1]: Reached target sockets.target - Socket Units.
Feb 14 01:46:24.232283 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 14 01:46:24.232291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 14 01:46:24.232300 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 14 01:46:24.232309 systemd[1]: Starting systemd-fsck-usr.service...
Feb 14 01:46:24.232317 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 14 01:46:24.232326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 14 01:46:24.232360 systemd-journald[901]: Collecting audit messages is disabled.
Feb 14 01:46:24.232381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:46:24.232390 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 14 01:46:24.232399 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 14 01:46:24.232409 kernel: Bridge firewalling registered
Feb 14 01:46:24.232418 systemd-journald[901]: Journal started
Feb 14 01:46:24.232437 systemd-journald[901]: Runtime Journal (/run/log/journal/b8f0334ac3dd44128ddc77c1b71d4a2d) is 8.0M, max 4.0G, 3.9G free.
Feb 14 01:46:24.188801 systemd-modules-load[903]: Inserted module 'overlay'
Feb 14 01:46:24.263878 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 14 01:46:24.211790 systemd-modules-load[903]: Inserted module 'br_netfilter'
Feb 14 01:46:24.269429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 01:46:24.280108 systemd[1]: Finished systemd-fsck-usr.service.
Feb 14 01:46:24.290953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 14 01:46:24.301588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:24.330306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 01:46:24.360330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 14 01:46:24.366541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 14 01:46:24.377526 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 14 01:46:24.393737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:46:24.409833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 14 01:46:24.426389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 14 01:46:24.437629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 01:46:24.466283 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 14 01:46:24.479582 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 14 01:46:24.486077 dracut-cmdline[946]: dracut-dracut-053
Feb 14 01:46:24.499096 dracut-cmdline[946]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 14 01:46:24.493237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 14 01:46:24.507251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 01:46:24.516305 systemd-resolved[955]: Positive Trust Anchors:
Feb 14 01:46:24.516314 systemd-resolved[955]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 14 01:46:24.516347 systemd-resolved[955]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 14 01:46:24.531343 systemd-resolved[955]: Defaulting to hostname 'linux'.
Feb 14 01:46:24.544199 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 14 01:46:24.660271 kernel: SCSI subsystem initialized
Feb 14 01:46:24.563364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 14 01:46:24.676198 kernel: Loading iSCSI transport class v2.0-870.
Feb 14 01:46:24.689194 kernel: iscsi: registered transport (tcp)
Feb 14 01:46:24.716884 kernel: iscsi: registered transport (qla4xxx)
Feb 14 01:46:24.716906 kernel: QLogic iSCSI HBA Driver
Feb 14 01:46:24.762221 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 14 01:46:24.786353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 14 01:46:24.831294 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 14 01:46:24.831312 kernel: device-mapper: uevent: version 1.0.3
Feb 14 01:46:24.841001 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 14 01:46:24.906190 kernel: raid6: neonx8 gen() 15848 MB/s
Feb 14 01:46:24.932189 kernel: raid6: neonx4 gen() 15716 MB/s
Feb 14 01:46:24.957189 kernel: raid6: neonx2 gen() 13476 MB/s
Feb 14 01:46:24.982189 kernel: raid6: neonx1 gen() 10551 MB/s
Feb 14 01:46:25.007189 kernel: raid6: int64x8 gen() 6985 MB/s
Feb 14 01:46:25.032189 kernel: raid6: int64x4 gen() 7384 MB/s
Feb 14 01:46:25.057189 kernel: raid6: int64x2 gen() 6152 MB/s
Feb 14 01:46:25.085505 kernel: raid6: int64x1 gen() 5075 MB/s
Feb 14 01:46:25.085526 kernel: raid6: using algorithm neonx8 gen() 15848 MB/s
Feb 14 01:46:25.119984 kernel: raid6: .... xor() 11969 MB/s, rmw enabled
Feb 14 01:46:25.120005 kernel: raid6: using neon recovery algorithm
Feb 14 01:46:25.143311 kernel: xor: measuring software checksum speed
Feb 14 01:46:25.143335 kernel: 8regs : 19807 MB/sec
Feb 14 01:46:25.151390 kernel: 32regs : 19646 MB/sec
Feb 14 01:46:25.159204 kernel: arm64_neon : 27204 MB/sec
Feb 14 01:46:25.166898 kernel: xor: using function: arm64_neon (27204 MB/sec)
Feb 14 01:46:25.228188 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 14 01:46:25.237795 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 14 01:46:25.250323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 01:46:25.263486 systemd-udevd[1146]: Using default interface naming scheme 'v255'.
Feb 14 01:46:25.266562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 01:46:25.287283 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 14 01:46:25.301397 dracut-pre-trigger[1156]: rd.md=0: removing MD RAID activation
Feb 14 01:46:25.327878 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 01:46:25.348342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 14 01:46:25.453954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 01:46:25.483571 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 14 01:46:25.483594 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 14 01:46:25.485298 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 14 01:46:25.643618 kernel: ACPI: bus type USB registered
Feb 14 01:46:25.643653 kernel: usbcore: registered new interface driver usbfs
Feb 14 01:46:25.643675 kernel: usbcore: registered new interface driver hub
Feb 14 01:46:25.643695 kernel: usbcore: registered new device driver usb
Feb 14 01:46:25.643711 kernel: PTP clock support registered
Feb 14 01:46:25.643721 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31
Feb 14 01:46:25.893891 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Feb 14 01:46:25.893987 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
Feb 14 01:46:25.894073 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
Feb 14 01:46:25.894155 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 14 01:46:25.894166 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32
Feb 14 01:46:26.532900 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 14 01:46:26.532924 kernel: igb 0003:03:00.0: Adding to iommu group 33
Feb 14 01:46:26.533096 kernel: nvme 0005:03:00.0: Adding to iommu group 34
Feb 14 01:46:26.533199 kernel: nvme 0005:04:00.0: Adding to iommu group 35
Feb 14 01:46:26.533289 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
Feb 14 01:46:26.533371 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Feb 14 01:46:26.533454 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
Feb 14 01:46:26.533536 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 14 01:46:26.533614 kernel: hub 1-0:1.0: USB hub found
Feb 14 01:46:26.533721 kernel: hub 1-0:1.0: 4 ports detected
Feb 14 01:46:26.533808 kernel: mlx5_core 0001:01:00.0: firmware version: 14.30.1004
Feb 14 01:46:26.533889 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 14 01:46:26.533968 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 14 01:46:26.534105 kernel: hub 2-0:1.0: USB hub found
Feb 14 01:46:26.534214 kernel: hub 2-0:1.0: 4 ports detected
Feb 14 01:46:26.534304 kernel: nvme nvme0: pci function 0005:03:00.0
Feb 14 01:46:26.534396 kernel: nvme nvme1: pci function 0005:04:00.0
Feb 14 01:46:26.534480 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
Feb 14 01:46:26.534557 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
Feb 14 01:46:26.534632 kernel: igb 0003:03:00.0: added PHC on eth0
Feb 14 01:46:26.534717 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 14 01:46:26.534798 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6c
Feb 14 01:46:26.534876 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
Feb 14 01:46:26.534955 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Feb 14 01:46:26.535034 kernel: igb 0003:03:00.1: Adding to iommu group 36
Feb 14 01:46:26.535119 kernel: nvme nvme0: 32/0/0 default/read/poll queues
Feb 14 01:46:26.535203 kernel: nvme nvme1: 32/0/0 default/read/poll queues
Feb 14 01:46:26.535280 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 14 01:46:26.535292 kernel: GPT:9289727 != 1875385007
Feb 14 01:46:26.535302 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 14 01:46:26.535311 kernel: GPT:9289727 != 1875385007
Feb 14 01:46:26.535321 kernel: igb 0003:03:00.1: added PHC on eth1
Feb 14 01:46:26.535401 kernel: GPT: Use GNU Parted to correct GPT errors.
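The "GPT:9289727 != 1875385007" messages above mean the backup GPT header's recorded LBA does not match the disk's actual last LBA, the usual symptom of a disk image built for a smaller disk being written to a larger one. The helper below is a hypothetical illustration that reproduces the kernel's comparison with the values from this log:

```shell
# gpt_backup_ok: mirror the kernel's backup-GPT-header location check.
# recorded_lba is where the primary header says the backup header lives;
# last_lba is the disk's real final sector.
gpt_backup_ok() {
  recorded_lba=$1
  last_lba=$2
  if [ "$recorded_lba" -eq "$last_lba" ]; then
    echo "consistent"
  else
    echo "mismatch: $recorded_lba != $last_lba"
  fi
}

gpt_backup_ok 9289727 1875385007   # values taken from the log above
```

To actually repair the layout one would follow the kernel's hint, e.g. interactive `parted /dev/nvme0n1 print` (answering "Fix") or `sgdisk --move-second-header /dev/nvme0n1`, which relocates the backup header to the true end of the disk; here Flatcar's `disk-uuid.service` performs the equivalent fix automatically later in the boot.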
Feb 14 01:46:26.535411 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:26.535424 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
Feb 14 01:46:26.535504 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
Feb 14 01:46:26.535638 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6d
Feb 14 01:46:26.535721 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1207)
Feb 14 01:46:26.535732 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
Feb 14 01:46:26.535810 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (1225)
Feb 14 01:46:26.535821 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Feb 14 01:46:26.535903 kernel: igb 0003:03:00.1 eno2: renamed from eth1
Feb 14 01:46:26.535983 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
Feb 14 01:46:26.536063 kernel: igb 0003:03:00.0 eno1: renamed from eth0
Feb 14 01:46:26.536143 kernel: hub 1-3:1.0: USB hub found
Feb 14 01:46:26.536246 kernel: hub 1-3:1.0: 4 ports detected
Feb 14 01:46:26.536336 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:26.536347 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:26.536357 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
Feb 14 01:46:26.536489 kernel: hub 2-3:1.0: USB hub found
Feb 14 01:46:26.536588 kernel: hub 2-3:1.0: 4 ports detected
Feb 14 01:46:26.536677 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 14 01:46:26.536761 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
Feb 14 01:46:27.122399 kernel: mlx5_core 0001:01:00.1: firmware version: 14.30.1004
Feb 14 01:46:27.122538 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 14 01:46:27.122618 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
Feb 14 01:46:27.122698 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 14 01:46:25.547436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 14 01:46:27.138457 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
Feb 14 01:46:25.547587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:46:27.159716 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
Feb 14 01:46:25.671805 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 01:46:25.677577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 01:46:25.677733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:25.683508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:46:25.698527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:46:25.704613 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 14 01:46:27.198600 disk-uuid[1309]: Primary Header is updated.
Feb 14 01:46:27.198600 disk-uuid[1309]: Secondary Entries is updated.
Feb 14 01:46:27.198600 disk-uuid[1309]: Secondary Header is updated.
Feb 14 01:46:25.711749 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 01:46:25.717431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 01:46:25.722964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 14 01:46:25.740345 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 14 01:46:25.746266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 01:46:25.746340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:25.753468 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 01:46:25.767292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:46:25.777138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:25.886285 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 01:46:26.044330 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:46:26.178014 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
Feb 14 01:46:26.278820 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
Feb 14 01:46:26.287976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
Feb 14 01:46:26.295952 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Feb 14 01:46:26.300407 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Feb 14 01:46:26.317329 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 14 01:46:27.359682 disk-uuid[1310]: The operation has completed successfully.
Feb 14 01:46:27.365335 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:27.384798 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 14 01:46:27.384882 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 14 01:46:27.419327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 14 01:46:27.430501 sh[1487]: Success
Feb 14 01:46:27.449186 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 14 01:46:27.482380 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 14 01:46:27.503398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 14 01:46:27.514823 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 14 01:46:27.609013 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 14 01:46:27.609039 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:27.609059 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 14 01:46:27.609086 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 14 01:46:27.609106 kernel: BTRFS info (device dm-0): using free space tree
Feb 14 01:46:27.609125 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 14 01:46:27.615158 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 14 01:46:27.622470 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 14 01:46:27.636347 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 14 01:46:27.716014 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:27.716029 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:27.716040 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 14 01:46:27.716050 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 14 01:46:27.716060 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Feb 14 01:46:27.643575 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 14 01:46:27.752550 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:27.742811 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 14 01:46:27.771310 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 14 01:46:27.827245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 01:46:27.836339 ignition[1590]: Ignition 2.19.0
Feb 14 01:46:27.836345 ignition[1590]: Stage: fetch-offline
Feb 14 01:46:27.843674 unknown[1590]: fetched base config from "system"
Feb 14 01:46:27.836401 ignition[1590]: no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:27.843682 unknown[1590]: fetched user config from "system"
Feb 14 01:46:27.836409 ignition[1590]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:27.856406 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 14 01:46:27.836705 ignition[1590]: parsed url from cmdline: ""
Feb 14 01:46:27.864043 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 01:46:27.836708 ignition[1590]: no config URL provided
Feb 14 01:46:27.880095 systemd-networkd[1729]: lo: Link UP
Feb 14 01:46:27.836712 ignition[1590]: reading system config file "/usr/lib/ignition/user.ign"
Feb 14 01:46:27.880099 systemd-networkd[1729]: lo: Gained carrier
Feb 14 01:46:27.836763 ignition[1590]: parsing config with SHA512: e0682aa1e70f1d06cb393943ecc32de9ba1c769cd8dbad1b366262e529c60d403786d44f6970e26b9361bd92edf746b325df0a836b8285effa3f6eb191d2e5e3
Feb 14 01:46:27.883729 systemd-networkd[1729]: Enumeration completed
Feb 14 01:46:27.844267 ignition[1590]: fetch-offline: fetch-offline passed
Feb 14 01:46:27.883847 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 14 01:46:27.844271 ignition[1590]: POST message to Packet Timeline
Feb 14 01:46:27.884846 systemd-networkd[1729]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:27.844276 ignition[1590]: POST Status error: resource requires networking
Feb 14 01:46:27.889277 systemd[1]: Reached target network.target - Network.
Feb 14 01:46:27.844344 ignition[1590]: Ignition finished successfully
Feb 14 01:46:27.898969 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 14 01:46:27.936692 ignition[1735]: Ignition 2.19.0
Feb 14 01:46:27.912342 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 14 01:46:27.936697 ignition[1735]: Stage: kargs
Feb 14 01:46:27.936639 systemd-networkd[1729]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:27.936983 ignition[1735]: no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:27.987768 systemd-networkd[1729]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:27.936992 ignition[1735]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:27.938160 ignition[1735]: kargs: kargs passed
Feb 14 01:46:27.938164 ignition[1735]: POST message to Packet Timeline
Feb 14 01:46:27.938178 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:27.940873 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50095->[::1]:53: read: connection refused
Feb 14 01:46:28.140951 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #2
Feb 14 01:46:28.141353 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49058->[::1]:53: read: connection refused
Feb 14 01:46:28.536195 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
Feb 14 01:46:28.538996 systemd-networkd[1729]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:28.541509 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #3
Feb 14 01:46:28.541910 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34009->[::1]:53: read: connection refused
Feb 14 01:46:29.120192 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
Feb 14 01:46:29.122826 systemd-networkd[1729]: eno1: Link UP
Feb 14 01:46:29.122955 systemd-networkd[1729]: eno2: Link UP
Feb 14 01:46:29.123070 systemd-networkd[1729]: enP1p1s0f0np0: Link UP
Feb 14 01:46:29.123221 systemd-networkd[1729]: enP1p1s0f0np0: Gained carrier
Feb 14 01:46:29.135404 systemd-networkd[1729]: enP1p1s0f1np1: Link UP
Feb 14 01:46:29.168211 systemd-networkd[1729]: enP1p1s0f0np0: DHCPv4 address 147.75.62.106/30, gateway 147.75.62.105 acquired from 147.28.144.140
Feb 14 01:46:29.342041 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #4
Feb 14 01:46:29.342482 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41308->[::1]:53: read: connection refused
Feb 14 01:46:29.544347 systemd-networkd[1729]: enP1p1s0f1np1: Gained carrier
Feb 14 01:46:30.152411 systemd-networkd[1729]: enP1p1s0f0np0: Gained IPv6LL
Feb 14 01:46:30.943787 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #5
Feb 14 01:46:30.944468 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38373->[::1]:53: read: connection refused
Feb 14 01:46:31.304384 systemd-networkd[1729]: enP1p1s0f1np1: Gained IPv6LL
Feb 14 01:46:34.147319 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #6
Feb 14 01:46:35.198798 ignition[1735]: GET result: OK
Feb 14 01:46:35.488506 ignition[1735]: Ignition finished successfully
Feb 14 01:46:35.492276 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 14 01:46:35.508295 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 14 01:46:35.523868 ignition[1754]: Ignition 2.19.0
Feb 14 01:46:35.523875 ignition[1754]: Stage: disks
Feb 14 01:46:35.524073 ignition[1754]: no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:35.524082 ignition[1754]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:35.525367 ignition[1754]: disks: disks passed
Feb 14 01:46:35.525372 ignition[1754]: POST message to Packet Timeline
Feb 14 01:46:35.525386 ignition[1754]: GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:36.308811 ignition[1754]: GET result: OK
Feb 14 01:46:36.632311 ignition[1754]: Ignition finished successfully
Feb 14 01:46:36.635700 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 14 01:46:36.641229 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 14 01:46:36.648879 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 14 01:46:36.656970 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 01:46:36.665653 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 14 01:46:36.674725 systemd[1]: Reached target basic.target - Basic System.
Feb 14 01:46:36.693325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 14 01:46:36.708639 systemd-fsck[1777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 14 01:46:36.712643 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 14 01:46:36.729255 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 14 01:46:36.794011 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 14 01:46:36.799004 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 14 01:46:36.804302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 14 01:46:36.829240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 01:46:36.921531 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1787)
Feb 14 01:46:36.921549 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:36.921560 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:36.921570 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 14 01:46:36.921583 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 14 01:46:36.921593 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Feb 14 01:46:36.835328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 14 01:46:36.931694 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 14 01:46:36.938464 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Feb 14 01:46:36.955574 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 14 01:46:36.955603 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 01:46:36.988023 coreos-metadata[1808]: Feb 14 01:46:36.985 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 14 01:46:37.004648 coreos-metadata[1807]: Feb 14 01:46:36.985 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 14 01:46:36.968700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 01:46:36.982660 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 14 01:46:37.002394 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 14 01:46:37.037504 initrd-setup-root[1826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 14 01:46:37.043567 initrd-setup-root[1833]: cut: /sysroot/etc/group: No such file or directory
Feb 14 01:46:37.049960 initrd-setup-root[1841]: cut: /sysroot/etc/shadow: No such file or directory
Feb 14 01:46:37.056156 initrd-setup-root[1849]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 14 01:46:37.124832 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 14 01:46:37.146255 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 14 01:46:37.176877 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:37.152663 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 14 01:46:37.183193 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 14 01:46:37.198618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 14 01:46:37.204142 coreos-metadata[1807]: Feb 14 01:46:37.199 INFO Fetch successful
Feb 14 01:46:37.215019 ignition[1922]: INFO : Ignition 2.19.0
Feb 14 01:46:37.215019 ignition[1922]: INFO : Stage: mount
Feb 14 01:46:37.215019 ignition[1922]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:37.215019 ignition[1922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:37.215019 ignition[1922]: INFO : mount: mount passed
Feb 14 01:46:37.215019 ignition[1922]: INFO : POST message to Packet Timeline
Feb 14 01:46:37.215019 ignition[1922]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:37.259582 coreos-metadata[1807]: Feb 14 01:46:37.243 INFO wrote hostname ci-4081.3.1-a-385c1ddb28 to /sysroot/etc/hostname
Feb 14 01:46:37.246338 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 14 01:46:38.174396 ignition[1922]: INFO : GET result: OK
Feb 14 01:46:38.253535 coreos-metadata[1808]: Feb 14 01:46:38.253 INFO Fetch successful
Feb 14 01:46:38.300148 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 14 01:46:38.300241 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Feb 14 01:46:38.556939 ignition[1922]: INFO : Ignition finished successfully
Feb 14 01:46:38.559076 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 14 01:46:38.577292 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 14 01:46:38.589782 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 01:46:38.625760 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1948)
Feb 14 01:46:38.625796 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:38.640183 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:38.653244 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 14 01:46:38.676250 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 14 01:46:38.676272 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Feb 14 01:46:38.684409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 01:46:38.716389 ignition[1966]: INFO : Ignition 2.19.0
Feb 14 01:46:38.716389 ignition[1966]: INFO : Stage: files
Feb 14 01:46:38.725974 ignition[1966]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:38.725974 ignition[1966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:38.725974 ignition[1966]: DEBUG : files: compiled without relabeling support, skipping
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 14 01:46:38.725974 ignition[1966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 14 01:46:38.721458 unknown[1966]: wrote ssh authorized keys file for user: core
Feb 14 01:46:38.881521 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 14 01:46:38.979084 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 14 01:46:39.189477 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 14 01:46:39.579458 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:39.579458 ignition[1966]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: files passed
Feb 14 01:46:39.603970 ignition[1966]: INFO : POST message to Packet Timeline
Feb 14 01:46:39.603970 ignition[1966]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:41.098671 ignition[1966]: INFO : GET result: OK
Feb 14 01:46:41.447582 ignition[1966]: INFO : Ignition finished successfully
Feb 14 01:46:41.451347 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 14 01:46:41.465305 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 14 01:46:41.472163 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 14 01:46:41.484012 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 14 01:46:41.484088 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 14 01:46:41.519331 initrd-setup-root-after-ignition[2012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 01:46:41.519331 initrd-setup-root-after-ignition[2012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 01:46:41.502271 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 01:46:41.565647 initrd-setup-root-after-ignition[2016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 01:46:41.515106 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 14 01:46:41.541364 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 14 01:46:41.579637 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 14 01:46:41.579710 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 14 01:46:41.589833 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 14 01:46:41.605852 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 14 01:46:41.617274 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 14 01:46:41.627337 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 14 01:46:41.650290 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 01:46:41.680361 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 14 01:46:41.702374 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 14 01:46:41.708291 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 01:46:41.719876 systemd[1]: Stopped target timers.target - Timer Units.
Feb 14 01:46:41.731413 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 14 01:46:41.731511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 01:46:41.743119 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 14 01:46:41.754400 systemd[1]: Stopped target basic.target - Basic System.
Feb 14 01:46:41.765810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 14 01:46:41.777221 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 01:46:41.788431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 14 01:46:41.799693 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 14 01:46:41.810936 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 01:46:41.822212 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 14 01:46:41.833461 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 14 01:46:41.850175 systemd[1]: Stopped target swap.target - Swaps.
Feb 14 01:46:41.861543 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 14 01:46:41.861634 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 01:46:41.873103 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 14 01:46:41.884227 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 01:46:41.895212 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 14 01:46:41.896233 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 01:46:41.906309 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 14 01:46:41.906400 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 14 01:46:41.917642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 14 01:46:41.917729 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 01:46:41.928777 systemd[1]: Stopped target paths.target - Path Units.
Feb 14 01:46:41.939782 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 14 01:46:41.944203 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 01:46:41.956659 systemd[1]: Stopped target slices.target - Slice Units.
Feb 14 01:46:41.967937 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 14 01:46:41.979495 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 14 01:46:41.979600 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 14 01:46:42.084656 ignition[2041]: INFO : Ignition 2.19.0
Feb 14 01:46:42.084656 ignition[2041]: INFO : Stage: umount
Feb 14 01:46:42.084656 ignition[2041]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:42.084656 ignition[2041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:42.084656 ignition[2041]: INFO : umount: umount passed
Feb 14 01:46:42.084656 ignition[2041]: INFO : POST message to Packet Timeline
Feb 14 01:46:42.084656 ignition[2041]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:41.990913 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 14 01:46:41.991010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 01:46:42.002460 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 14 01:46:42.002545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 01:46:42.013898 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 14 01:46:42.013975 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 14 01:46:42.025326 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 14 01:46:42.025405 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 14 01:46:42.048301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 14 01:46:42.055007 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 14 01:46:42.066969 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 14 01:46:42.067070 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 01:46:42.078971 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 14 01:46:42.079054 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 01:46:42.092548 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 14 01:46:42.093978 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 14 01:46:42.094064 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 14 01:46:42.129423 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 14 01:46:42.129529 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 14 01:46:43.084973 ignition[2041]: INFO : GET result: OK
Feb 14 01:46:43.390042 ignition[2041]: INFO : Ignition finished successfully
Feb 14 01:46:43.392714 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 14 01:46:43.392924 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 14 01:46:43.400088 systemd[1]: Stopped target network.target - Network.
Feb 14 01:46:43.409002 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 14 01:46:43.409057 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 14 01:46:43.418541 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 14 01:46:43.418573 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 14 01:46:43.427975 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 14 01:46:43.428021 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 14 01:46:43.437552 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 14 01:46:43.437596 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 14 01:46:43.447234 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 14 01:46:43.447260 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 14 01:46:43.457085 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 14 01:46:43.465205 systemd-networkd[1729]: enP1p1s0f1np1: DHCPv6 lease lost Feb 14 01:46:43.466558 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 14 01:46:43.476231 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 14 01:46:43.476341 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 14 01:46:43.477289 systemd-networkd[1729]: enP1p1s0f0np0: DHCPv6 lease lost Feb 14 01:46:43.488063 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 14 01:46:43.488174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 01:46:43.496398 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 14 01:46:43.496581 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 14 01:46:43.506641 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 14 01:46:43.506838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 14 01:46:43.526322 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 14 01:46:43.535222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Feb 14 01:46:43.535285 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 01:46:43.545317 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 14 01:46:43.545349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 14 01:46:43.555191 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 14 01:46:43.555219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 14 01:46:43.565471 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 01:46:43.587547 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 14 01:46:43.587674 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 01:46:43.598748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 14 01:46:43.598871 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 14 01:46:43.607896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 14 01:46:43.607947 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 01:46:43.618606 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 14 01:46:43.618649 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 14 01:46:43.629557 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 14 01:46:43.629595 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 14 01:46:43.640195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 01:46:43.640241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:46:43.663286 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 14 01:46:43.673229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 14 01:46:43.673292 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 01:46:43.684350 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 14 01:46:43.684395 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 01:46:43.695421 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 14 01:46:43.695449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 01:46:43.706749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 01:46:43.706778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:46:43.718650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 14 01:46:43.718721 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 14 01:46:44.217633 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 14 01:46:44.217757 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 14 01:46:44.230354 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 14 01:46:44.252327 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 14 01:46:44.262140 systemd[1]: Switching root. 
Feb 14 01:46:44.316287 systemd-journald[901]: Journal stopped Feb 14 01:46:24.169486 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] Feb 14 01:46:24.169509 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 14 01:46:24.169517 kernel: KASLR enabled Feb 14 01:46:24.169523 kernel: efi: EFI v2.7 by American Megatrends Feb 14 01:46:24.169529 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47e818 RNG=0xebf00018 MEMRESERVE=0xe45e8f98 Feb 14 01:46:24.169535 kernel: random: crng init done Feb 14 01:46:24.169543 kernel: esrt: Reserving ESRT space from 0x00000000ea47e818 to 0x00000000ea47e878. Feb 14 01:46:24.169549 kernel: ACPI: Early table checksum verification disabled Feb 14 01:46:24.169557 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) Feb 14 01:46:24.169563 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) Feb 14 01:46:24.169569 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) Feb 14 01:46:24.169576 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) Feb 14 01:46:24.169582 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) Feb 14 01:46:24.169588 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) Feb 14 01:46:24.169597 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) Feb 14 01:46:24.169603 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) Feb 14 01:46:24.169610 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) Feb 14 01:46:24.169617 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) Feb 14 01:46:24.169623 kernel: ACPI: MCFG 
0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) Feb 14 01:46:24.169629 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) Feb 14 01:46:24.169636 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) Feb 14 01:46:24.169643 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) Feb 14 01:46:24.169649 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) Feb 14 01:46:24.169657 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) Feb 14 01:46:24.169664 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013) Feb 14 01:46:24.169670 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) Feb 14 01:46:24.169677 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) Feb 14 01:46:24.169684 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 Feb 14 01:46:24.169690 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] Feb 14 01:46:24.169697 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] Feb 14 01:46:24.169703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] Feb 14 01:46:24.169710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] Feb 14 01:46:24.169716 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff] Feb 14 01:46:24.169723 kernel: Zone ranges: Feb 14 01:46:24.169729 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] Feb 14 01:46:24.169737 kernel: DMA32 empty Feb 14 01:46:24.169744 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] Feb 14 01:46:24.169750 kernel: Movable zone start for each node Feb 14 01:46:24.169757 kernel: Early memory node ranges Feb 14 01:46:24.169763 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] Feb 14 01:46:24.169773 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] Feb 
14 01:46:24.169779 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] Feb 14 01:46:24.169788 kernel: node 0: [mem 0x0000000094000000-0x00000000eba37fff] Feb 14 01:46:24.169794 kernel: node 0: [mem 0x00000000eba38000-0x00000000ebeccfff] Feb 14 01:46:24.169801 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] Feb 14 01:46:24.169808 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] Feb 14 01:46:24.169815 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] Feb 14 01:46:24.169822 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] Feb 14 01:46:24.169828 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff] Feb 14 01:46:24.169835 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff] Feb 14 01:46:24.169842 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] Feb 14 01:46:24.169849 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] Feb 14 01:46:24.169857 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] Feb 14 01:46:24.169864 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] Feb 14 01:46:24.169871 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] Feb 14 01:46:24.169877 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] Feb 14 01:46:24.169884 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] Feb 14 01:46:24.169891 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] Feb 14 01:46:24.169898 kernel: On node 0, zone DMA: 768 pages in unavailable ranges Feb 14 01:46:24.169905 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges Feb 14 01:46:24.169912 kernel: psci: probing for conduit method from ACPI. Feb 14 01:46:24.169919 kernel: psci: PSCIv1.1 detected in firmware. Feb 14 01:46:24.169926 kernel: psci: Using standard PSCI v0.2 function IDs Feb 14 01:46:24.169934 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Feb 14 01:46:24.169941 kernel: psci: SMC Calling Convention v1.2 Feb 14 01:46:24.169948 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Feb 14 01:46:24.169955 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Feb 14 01:46:24.169962 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Feb 14 01:46:24.169969 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Feb 14 01:46:24.169976 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Feb 14 01:46:24.169982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Feb 14 01:46:24.169989 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Feb 14 01:46:24.169996 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Feb 14 01:46:24.170003 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Feb 14 01:46:24.170010 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Feb 14 01:46:24.170018 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Feb 14 01:46:24.170025 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Feb 14 01:46:24.170032 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Feb 14 01:46:24.170039 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Feb 14 01:46:24.170045 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Feb 14 01:46:24.170052 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Feb 14 01:46:24.170059 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Feb 14 01:46:24.170066 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Feb 14 01:46:24.170073 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Feb 14 01:46:24.170080 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Feb 14 01:46:24.170087 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Feb 14 01:46:24.170093 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Feb 14 01:46:24.170102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Feb 14 01:46:24.170108 kernel: ACPI: NUMA: SRAT: PXM 0 
-> MPIDR 0xb0100 -> Node 0 Feb 14 01:46:24.170115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Feb 14 01:46:24.170122 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Feb 14 01:46:24.170129 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Feb 14 01:46:24.170136 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Feb 14 01:46:24.170143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Feb 14 01:46:24.170149 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Feb 14 01:46:24.170156 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Feb 14 01:46:24.170163 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Feb 14 01:46:24.170170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Feb 14 01:46:24.170182 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Feb 14 01:46:24.170190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Feb 14 01:46:24.170196 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Feb 14 01:46:24.170203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Feb 14 01:46:24.170210 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Feb 14 01:46:24.170217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 Feb 14 01:46:24.170224 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Feb 14 01:46:24.170230 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Feb 14 01:46:24.170237 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Feb 14 01:46:24.170244 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Feb 14 01:46:24.170251 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Feb 14 01:46:24.170258 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Feb 14 01:46:24.170266 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Feb 14 01:46:24.170273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Feb 14 01:46:24.170280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x170100 -> Node 0 Feb 14 01:46:24.170287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Feb 14 01:46:24.170294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Feb 14 01:46:24.170301 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Feb 14 01:46:24.170307 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Feb 14 01:46:24.170314 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Feb 14 01:46:24.170328 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Feb 14 01:46:24.170335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Feb 14 01:46:24.170344 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Feb 14 01:46:24.170351 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Feb 14 01:46:24.170358 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Feb 14 01:46:24.170366 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Feb 14 01:46:24.170373 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Feb 14 01:46:24.170380 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Feb 14 01:46:24.170389 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Feb 14 01:46:24.170396 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Feb 14 01:46:24.170403 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Feb 14 01:46:24.170411 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Feb 14 01:46:24.170418 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Feb 14 01:46:24.170425 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Feb 14 01:46:24.170432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Feb 14 01:46:24.170440 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Feb 14 01:46:24.170447 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Feb 14 01:46:24.170454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Feb 14 01:46:24.170461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x230100 -> Node 0 Feb 14 01:46:24.170468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Feb 14 01:46:24.170477 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Feb 14 01:46:24.170484 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Feb 14 01:46:24.170492 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Feb 14 01:46:24.170499 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Feb 14 01:46:24.170506 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Feb 14 01:46:24.170513 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Feb 14 01:46:24.170520 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Feb 14 01:46:24.170527 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 14 01:46:24.170535 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 14 01:46:24.170542 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Feb 14 01:46:24.170549 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Feb 14 01:46:24.170558 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Feb 14 01:46:24.170565 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Feb 14 01:46:24.170573 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Feb 14 01:46:24.170580 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Feb 14 01:46:24.170587 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Feb 14 01:46:24.170594 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Feb 14 01:46:24.170601 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Feb 14 01:46:24.170608 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Feb 14 01:46:24.170615 kernel: Detected PIPT I-cache on CPU0 Feb 14 01:46:24.170622 kernel: CPU features: detected: GIC system register CPU interface 
Feb 14 01:46:24.170630 kernel: CPU features: detected: Virtualization Host Extensions Feb 14 01:46:24.170639 kernel: CPU features: detected: Hardware dirty bit management Feb 14 01:46:24.170646 kernel: CPU features: detected: Spectre-v4 Feb 14 01:46:24.170653 kernel: CPU features: detected: Spectre-BHB Feb 14 01:46:24.170661 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 14 01:46:24.170668 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 14 01:46:24.170675 kernel: CPU features: detected: ARM erratum 1418040 Feb 14 01:46:24.170682 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 14 01:46:24.170690 kernel: alternatives: applying boot alternatives Feb 14 01:46:24.170699 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 14 01:46:24.170706 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 14 01:46:24.170715 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Feb 14 01:46:24.170722 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Feb 14 01:46:24.170729 kernel: printk: log_buf_len min size: 262144 bytes Feb 14 01:46:24.170737 kernel: printk: log_buf_len: 1048576 bytes Feb 14 01:46:24.170744 kernel: printk: early log buf free: 250032(95%) Feb 14 01:46:24.170751 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Feb 14 01:46:24.170758 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Feb 14 01:46:24.170766 kernel: Fallback order for Node 0: 0 Feb 14 01:46:24.170773 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 Feb 14 01:46:24.170780 kernel: Policy zone: Normal Feb 14 01:46:24.170787 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 14 01:46:24.170794 kernel: software IO TLB: area num 128. Feb 14 01:46:24.170803 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Feb 14 01:46:24.170810 kernel: Memory: 262922520K/268174336K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 5251816K reserved, 0K cma-reserved) Feb 14 01:46:24.170818 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Feb 14 01:46:24.170825 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 14 01:46:24.170833 kernel: rcu: RCU event tracing is enabled. Feb 14 01:46:24.170840 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Feb 14 01:46:24.170848 kernel: Trampoline variant of Tasks RCU enabled. Feb 14 01:46:24.170855 kernel: Tracing variant of Tasks RCU enabled. Feb 14 01:46:24.170863 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 14 01:46:24.170870 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Feb 14 01:46:24.170877 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 14 01:46:24.170886 kernel: GICv3: GIC: Using split EOI/Deactivate mode Feb 14 01:46:24.170893 kernel: GICv3: 672 SPIs implemented Feb 14 01:46:24.170901 kernel: GICv3: 0 Extended SPIs implemented Feb 14 01:46:24.170908 kernel: Root IRQ handler: gic_handle_irq Feb 14 01:46:24.170915 kernel: GICv3: GICv3 features: 16 PPIs Feb 14 01:46:24.170922 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Feb 14 01:46:24.170930 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Feb 14 01:46:24.170937 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Feb 14 01:46:24.170944 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Feb 14 01:46:24.170951 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Feb 14 01:46:24.170958 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Feb 14 01:46:24.170965 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Feb 14 01:46:24.170973 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Feb 14 01:46:24.170981 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Feb 14 01:46:24.170988 kernel: ITS [mem 0x100100040000-0x10010005ffff] Feb 14 01:46:24.170996 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171003 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171011 kernel: ITS [mem 0x100100060000-0x10010007ffff] Feb 14 01:46:24.171018 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171026 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171033 kernel: ITS [mem 0x100100080000-0x10010009ffff] Feb 14 01:46:24.171041 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171048 kernel: 
ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171055 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Feb 14 01:46:24.171064 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171072 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171079 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Feb 14 01:46:24.171086 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171094 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171101 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Feb 14 01:46:24.171109 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171116 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171123 kernel: ITS [mem 0x100100100000-0x10010011ffff] Feb 14 01:46:24.171131 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171138 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171147 kernel: ITS [mem 0x100100120000-0x10010013ffff] Feb 14 01:46:24.171154 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) Feb 14 01:46:24.171162 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) Feb 14 01:46:24.171169 kernel: GICv3: using LPI property table @0x00000800003e0000 Feb 14 01:46:24.171176 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 Feb 14 01:46:24.171185 kernel: rcu: srcu_init: 
Setting srcu_struct sizes based on contention. Feb 14 01:46:24.171193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171200 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Feb 14 01:46:24.171208 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Feb 14 01:46:24.171215 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 14 01:46:24.171223 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 14 01:46:24.171232 kernel: Console: colour dummy device 80x25 Feb 14 01:46:24.171239 kernel: printk: console [tty0] enabled Feb 14 01:46:24.171247 kernel: ACPI: Core revision 20230628 Feb 14 01:46:24.171254 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 14 01:46:24.171262 kernel: pid_max: default: 81920 minimum: 640 Feb 14 01:46:24.171269 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 14 01:46:24.171277 kernel: landlock: Up and running. Feb 14 01:46:24.171284 kernel: SELinux: Initializing. Feb 14 01:46:24.171292 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.171300 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.171309 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Feb 14 01:46:24.171316 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Feb 14 01:46:24.171324 kernel: rcu: Hierarchical SRCU implementation. Feb 14 01:46:24.171331 kernel: rcu: Max phase no-delay instances is 400. 
Feb 14 01:46:24.171339 kernel: Platform MSI: ITS@0x100100040000 domain created Feb 14 01:46:24.171346 kernel: Platform MSI: ITS@0x100100060000 domain created Feb 14 01:46:24.171354 kernel: Platform MSI: ITS@0x100100080000 domain created Feb 14 01:46:24.171361 kernel: Platform MSI: ITS@0x1001000a0000 domain created Feb 14 01:46:24.171370 kernel: Platform MSI: ITS@0x1001000c0000 domain created Feb 14 01:46:24.171377 kernel: Platform MSI: ITS@0x1001000e0000 domain created Feb 14 01:46:24.171385 kernel: Platform MSI: ITS@0x100100100000 domain created Feb 14 01:46:24.171392 kernel: Platform MSI: ITS@0x100100120000 domain created Feb 14 01:46:24.171399 kernel: PCI/MSI: ITS@0x100100040000 domain created Feb 14 01:46:24.171407 kernel: PCI/MSI: ITS@0x100100060000 domain created Feb 14 01:46:24.171414 kernel: PCI/MSI: ITS@0x100100080000 domain created Feb 14 01:46:24.171422 kernel: PCI/MSI: ITS@0x1001000a0000 domain created Feb 14 01:46:24.171429 kernel: PCI/MSI: ITS@0x1001000c0000 domain created Feb 14 01:46:24.171437 kernel: PCI/MSI: ITS@0x1001000e0000 domain created Feb 14 01:46:24.171445 kernel: PCI/MSI: ITS@0x100100100000 domain created Feb 14 01:46:24.171453 kernel: PCI/MSI: ITS@0x100100120000 domain created Feb 14 01:46:24.171460 kernel: Remapping and enabling EFI services. Feb 14 01:46:24.171467 kernel: smp: Bringing up secondary CPUs ... 
Feb 14 01:46:24.171475 kernel: Detected PIPT I-cache on CPU1 Feb 14 01:46:24.171483 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Feb 14 01:46:24.171490 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 Feb 14 01:46:24.171498 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171505 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Feb 14 01:46:24.171515 kernel: Detected PIPT I-cache on CPU2 Feb 14 01:46:24.171523 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Feb 14 01:46:24.171530 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 Feb 14 01:46:24.171538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171545 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Feb 14 01:46:24.171552 kernel: Detected PIPT I-cache on CPU3 Feb 14 01:46:24.171560 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Feb 14 01:46:24.171567 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 Feb 14 01:46:24.171575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171582 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Feb 14 01:46:24.171591 kernel: Detected PIPT I-cache on CPU4 Feb 14 01:46:24.171598 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Feb 14 01:46:24.171606 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 Feb 14 01:46:24.171613 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171620 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Feb 14 01:46:24.171627 kernel: Detected PIPT I-cache on CPU5 Feb 14 01:46:24.171635 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Feb 14 01:46:24.171642 kernel: GICv3: CPU5: using allocated LPI pending 
table @0x0000080000840000 Feb 14 01:46:24.171650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171658 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1] Feb 14 01:46:24.171666 kernel: Detected PIPT I-cache on CPU6 Feb 14 01:46:24.171674 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Feb 14 01:46:24.171681 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 Feb 14 01:46:24.171688 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171696 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Feb 14 01:46:24.171703 kernel: Detected PIPT I-cache on CPU7 Feb 14 01:46:24.171710 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Feb 14 01:46:24.171718 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 Feb 14 01:46:24.171727 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171734 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Feb 14 01:46:24.171741 kernel: Detected PIPT I-cache on CPU8 Feb 14 01:46:24.171749 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Feb 14 01:46:24.171756 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 Feb 14 01:46:24.171764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171771 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Feb 14 01:46:24.171778 kernel: Detected PIPT I-cache on CPU9 Feb 14 01:46:24.171786 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Feb 14 01:46:24.171793 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 Feb 14 01:46:24.171802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171810 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Feb 14 01:46:24.171817 
kernel: Detected PIPT I-cache on CPU10 Feb 14 01:46:24.171825 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Feb 14 01:46:24.171832 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 Feb 14 01:46:24.171839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171847 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Feb 14 01:46:24.171854 kernel: Detected PIPT I-cache on CPU11 Feb 14 01:46:24.171862 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Feb 14 01:46:24.171869 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 Feb 14 01:46:24.171878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171886 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Feb 14 01:46:24.171893 kernel: Detected PIPT I-cache on CPU12 Feb 14 01:46:24.171900 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Feb 14 01:46:24.171908 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 Feb 14 01:46:24.171915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171922 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Feb 14 01:46:24.171930 kernel: Detected PIPT I-cache on CPU13 Feb 14 01:46:24.171937 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Feb 14 01:46:24.171946 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 Feb 14 01:46:24.171953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171961 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] Feb 14 01:46:24.171968 kernel: Detected PIPT I-cache on CPU14 Feb 14 01:46:24.171976 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Feb 14 01:46:24.171983 kernel: GICv3: CPU14: using allocated LPI pending table 
@0x00000800008d0000 Feb 14 01:46:24.171991 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.171998 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Feb 14 01:46:24.172005 kernel: Detected PIPT I-cache on CPU15 Feb 14 01:46:24.172014 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Feb 14 01:46:24.172022 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 Feb 14 01:46:24.172029 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172037 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Feb 14 01:46:24.172044 kernel: Detected PIPT I-cache on CPU16 Feb 14 01:46:24.172052 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Feb 14 01:46:24.172059 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 Feb 14 01:46:24.172066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172074 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Feb 14 01:46:24.172081 kernel: Detected PIPT I-cache on CPU17 Feb 14 01:46:24.172098 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 Feb 14 01:46:24.172107 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 Feb 14 01:46:24.172115 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172123 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Feb 14 01:46:24.172130 kernel: Detected PIPT I-cache on CPU18 Feb 14 01:46:24.172138 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Feb 14 01:46:24.172146 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 Feb 14 01:46:24.172154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172161 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Feb 14 01:46:24.172171 
kernel: Detected PIPT I-cache on CPU19 Feb 14 01:46:24.172180 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Feb 14 01:46:24.172189 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 Feb 14 01:46:24.172196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172204 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Feb 14 01:46:24.172213 kernel: Detected PIPT I-cache on CPU20 Feb 14 01:46:24.172221 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Feb 14 01:46:24.172231 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 Feb 14 01:46:24.172239 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172246 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Feb 14 01:46:24.172254 kernel: Detected PIPT I-cache on CPU21 Feb 14 01:46:24.172263 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Feb 14 01:46:24.172271 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 Feb 14 01:46:24.172279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172287 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Feb 14 01:46:24.172296 kernel: Detected PIPT I-cache on CPU22 Feb 14 01:46:24.172304 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Feb 14 01:46:24.172311 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 Feb 14 01:46:24.172319 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172327 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Feb 14 01:46:24.172335 kernel: Detected PIPT I-cache on CPU23 Feb 14 01:46:24.172342 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Feb 14 01:46:24.172350 kernel: GICv3: CPU23: using allocated LPI pending table 
@0x0000080000960000 Feb 14 01:46:24.172358 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172366 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Feb 14 01:46:24.172375 kernel: Detected PIPT I-cache on CPU24 Feb 14 01:46:24.172383 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Feb 14 01:46:24.172391 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 Feb 14 01:46:24.172398 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172406 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Feb 14 01:46:24.172414 kernel: Detected PIPT I-cache on CPU25 Feb 14 01:46:24.172422 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Feb 14 01:46:24.172429 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 Feb 14 01:46:24.172437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172446 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Feb 14 01:46:24.172454 kernel: Detected PIPT I-cache on CPU26 Feb 14 01:46:24.172462 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Feb 14 01:46:24.172470 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 Feb 14 01:46:24.172477 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172485 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Feb 14 01:46:24.172493 kernel: Detected PIPT I-cache on CPU27 Feb 14 01:46:24.172501 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Feb 14 01:46:24.172509 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 Feb 14 01:46:24.172517 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172526 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Feb 14 
01:46:24.172533 kernel: Detected PIPT I-cache on CPU28 Feb 14 01:46:24.172541 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Feb 14 01:46:24.172549 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Feb 14 01:46:24.172557 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172564 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Feb 14 01:46:24.172572 kernel: Detected PIPT I-cache on CPU29 Feb 14 01:46:24.172580 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Feb 14 01:46:24.172588 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Feb 14 01:46:24.172597 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172605 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] Feb 14 01:46:24.172613 kernel: Detected PIPT I-cache on CPU30 Feb 14 01:46:24.172620 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Feb 14 01:46:24.172628 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Feb 14 01:46:24.172636 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172644 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Feb 14 01:46:24.172651 kernel: Detected PIPT I-cache on CPU31 Feb 14 01:46:24.172659 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Feb 14 01:46:24.172667 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Feb 14 01:46:24.172676 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172684 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Feb 14 01:46:24.172692 kernel: Detected PIPT I-cache on CPU32 Feb 14 01:46:24.172700 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Feb 14 01:46:24.172707 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 Feb 14 01:46:24.172715 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172723 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Feb 14 01:46:24.172731 kernel: Detected PIPT I-cache on CPU33 Feb 14 01:46:24.172738 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Feb 14 01:46:24.172748 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Feb 14 01:46:24.172756 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172763 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Feb 14 01:46:24.172771 kernel: Detected PIPT I-cache on CPU34 Feb 14 01:46:24.172779 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Feb 14 01:46:24.172787 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Feb 14 01:46:24.172796 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172803 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Feb 14 01:46:24.172811 kernel: Detected PIPT I-cache on CPU35 Feb 14 01:46:24.172819 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Feb 14 01:46:24.172828 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Feb 14 01:46:24.172836 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172844 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Feb 14 01:46:24.172852 kernel: Detected PIPT I-cache on CPU36 Feb 14 01:46:24.172859 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Feb 14 01:46:24.172867 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Feb 14 01:46:24.172875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172883 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Feb 
14 01:46:24.172891 kernel: Detected PIPT I-cache on CPU37 Feb 14 01:46:24.172900 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Feb 14 01:46:24.172908 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Feb 14 01:46:24.172916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172923 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Feb 14 01:46:24.172931 kernel: Detected PIPT I-cache on CPU38 Feb 14 01:46:24.172939 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Feb 14 01:46:24.172946 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Feb 14 01:46:24.172954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.172962 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Feb 14 01:46:24.172969 kernel: Detected PIPT I-cache on CPU39 Feb 14 01:46:24.172979 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Feb 14 01:46:24.172986 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Feb 14 01:46:24.172994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173002 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Feb 14 01:46:24.173010 kernel: Detected PIPT I-cache on CPU40 Feb 14 01:46:24.173017 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Feb 14 01:46:24.173025 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Feb 14 01:46:24.173034 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173042 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Feb 14 01:46:24.173050 kernel: Detected PIPT I-cache on CPU41 Feb 14 01:46:24.173058 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Feb 14 01:46:24.173065 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 Feb 14 01:46:24.173073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173081 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Feb 14 01:46:24.173089 kernel: Detected PIPT I-cache on CPU42 Feb 14 01:46:24.173096 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Feb 14 01:46:24.173104 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Feb 14 01:46:24.173113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173121 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Feb 14 01:46:24.173129 kernel: Detected PIPT I-cache on CPU43 Feb 14 01:46:24.173136 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Feb 14 01:46:24.173144 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Feb 14 01:46:24.173152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173159 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Feb 14 01:46:24.173167 kernel: Detected PIPT I-cache on CPU44 Feb 14 01:46:24.173175 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Feb 14 01:46:24.173186 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Feb 14 01:46:24.173194 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173202 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Feb 14 01:46:24.173210 kernel: Detected PIPT I-cache on CPU45 Feb 14 01:46:24.173218 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Feb 14 01:46:24.173226 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Feb 14 01:46:24.173234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173241 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
Feb 14 01:46:24.173249 kernel: Detected PIPT I-cache on CPU46 Feb 14 01:46:24.173257 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Feb 14 01:46:24.173268 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Feb 14 01:46:24.173276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173283 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Feb 14 01:46:24.173291 kernel: Detected PIPT I-cache on CPU47 Feb 14 01:46:24.173299 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Feb 14 01:46:24.173307 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Feb 14 01:46:24.173315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173322 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Feb 14 01:46:24.173330 kernel: Detected PIPT I-cache on CPU48 Feb 14 01:46:24.173339 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Feb 14 01:46:24.173347 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Feb 14 01:46:24.173355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173363 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Feb 14 01:46:24.173370 kernel: Detected PIPT I-cache on CPU49 Feb 14 01:46:24.173378 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Feb 14 01:46:24.173386 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Feb 14 01:46:24.173394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173401 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Feb 14 01:46:24.173409 kernel: Detected PIPT I-cache on CPU50 Feb 14 01:46:24.173418 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Feb 14 01:46:24.173426 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 Feb 14 01:46:24.173433 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173441 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Feb 14 01:46:24.173449 kernel: Detected PIPT I-cache on CPU51 Feb 14 01:46:24.173456 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Feb 14 01:46:24.173464 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Feb 14 01:46:24.173472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173479 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Feb 14 01:46:24.173489 kernel: Detected PIPT I-cache on CPU52 Feb 14 01:46:24.173496 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Feb 14 01:46:24.173504 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Feb 14 01:46:24.173512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173519 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Feb 14 01:46:24.173527 kernel: Detected PIPT I-cache on CPU53 Feb 14 01:46:24.173536 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Feb 14 01:46:24.173544 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Feb 14 01:46:24.173552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173560 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Feb 14 01:46:24.173569 kernel: Detected PIPT I-cache on CPU54 Feb 14 01:46:24.173576 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Feb 14 01:46:24.173585 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Feb 14 01:46:24.173592 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173600 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] Feb 14 01:46:24.173608 kernel: Detected PIPT I-cache on CPU55 Feb 14 01:46:24.173615 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Feb 14 01:46:24.173623 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Feb 14 01:46:24.173631 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173640 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Feb 14 01:46:24.173648 kernel: Detected PIPT I-cache on CPU56 Feb 14 01:46:24.173656 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Feb 14 01:46:24.173663 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Feb 14 01:46:24.173671 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173679 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Feb 14 01:46:24.173687 kernel: Detected PIPT I-cache on CPU57 Feb 14 01:46:24.173695 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Feb 14 01:46:24.173702 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Feb 14 01:46:24.173712 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173719 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Feb 14 01:46:24.173727 kernel: Detected PIPT I-cache on CPU58 Feb 14 01:46:24.173735 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Feb 14 01:46:24.173743 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Feb 14 01:46:24.173751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173758 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Feb 14 01:46:24.173766 kernel: Detected PIPT I-cache on CPU59 Feb 14 01:46:24.173774 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Feb 14 01:46:24.173781 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 Feb 14 01:46:24.173790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173798 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Feb 14 01:46:24.173806 kernel: Detected PIPT I-cache on CPU60 Feb 14 01:46:24.173814 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Feb 14 01:46:24.173822 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Feb 14 01:46:24.173829 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173837 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Feb 14 01:46:24.173845 kernel: Detected PIPT I-cache on CPU61 Feb 14 01:46:24.173853 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Feb 14 01:46:24.173862 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Feb 14 01:46:24.173870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173878 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Feb 14 01:46:24.173885 kernel: Detected PIPT I-cache on CPU62 Feb 14 01:46:24.173893 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Feb 14 01:46:24.173901 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Feb 14 01:46:24.173908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173916 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Feb 14 01:46:24.173924 kernel: Detected PIPT I-cache on CPU63 Feb 14 01:46:24.173932 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Feb 14 01:46:24.173941 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Feb 14 01:46:24.173949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173957 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] Feb 14 01:46:24.173964 kernel: Detected PIPT I-cache on CPU64 Feb 14 01:46:24.173972 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Feb 14 01:46:24.173980 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Feb 14 01:46:24.173988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.173995 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Feb 14 01:46:24.174003 kernel: Detected PIPT I-cache on CPU65 Feb 14 01:46:24.174012 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Feb 14 01:46:24.174020 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Feb 14 01:46:24.174028 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174036 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Feb 14 01:46:24.174043 kernel: Detected PIPT I-cache on CPU66 Feb 14 01:46:24.174051 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Feb 14 01:46:24.174059 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Feb 14 01:46:24.174067 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174074 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Feb 14 01:46:24.174082 kernel: Detected PIPT I-cache on CPU67 Feb 14 01:46:24.174092 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Feb 14 01:46:24.174100 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Feb 14 01:46:24.174107 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174115 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Feb 14 01:46:24.174123 kernel: Detected PIPT I-cache on CPU68 Feb 14 01:46:24.174131 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Feb 14 01:46:24.174138 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 Feb 14 01:46:24.174146 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174154 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Feb 14 01:46:24.174163 kernel: Detected PIPT I-cache on CPU69 Feb 14 01:46:24.174171 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Feb 14 01:46:24.174181 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Feb 14 01:46:24.174189 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174197 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Feb 14 01:46:24.174205 kernel: Detected PIPT I-cache on CPU70 Feb 14 01:46:24.174213 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Feb 14 01:46:24.174221 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Feb 14 01:46:24.174228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174236 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Feb 14 01:46:24.174245 kernel: Detected PIPT I-cache on CPU71 Feb 14 01:46:24.174253 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Feb 14 01:46:24.174261 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Feb 14 01:46:24.174269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174276 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Feb 14 01:46:24.174284 kernel: Detected PIPT I-cache on CPU72 Feb 14 01:46:24.174292 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Feb 14 01:46:24.174300 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Feb 14 01:46:24.174307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174316 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] Feb 14 01:46:24.174324 kernel: Detected PIPT I-cache on CPU73 Feb 14 01:46:24.174332 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Feb 14 01:46:24.174339 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Feb 14 01:46:24.174347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174355 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Feb 14 01:46:24.174362 kernel: Detected PIPT I-cache on CPU74 Feb 14 01:46:24.174370 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Feb 14 01:46:24.174378 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Feb 14 01:46:24.174387 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174395 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Feb 14 01:46:24.174403 kernel: Detected PIPT I-cache on CPU75 Feb 14 01:46:24.174410 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Feb 14 01:46:24.174418 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Feb 14 01:46:24.174426 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174434 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Feb 14 01:46:24.174442 kernel: Detected PIPT I-cache on CPU76 Feb 14 01:46:24.174449 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Feb 14 01:46:24.174457 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Feb 14 01:46:24.174467 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174475 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Feb 14 01:46:24.174482 kernel: Detected PIPT I-cache on CPU77 Feb 14 01:46:24.174490 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Feb 14 01:46:24.174498 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Feb 14 01:46:24.174506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174513 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Feb 14 01:46:24.174521 kernel: Detected PIPT I-cache on CPU78 Feb 14 01:46:24.174529 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Feb 14 01:46:24.174538 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Feb 14 01:46:24.174546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174554 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Feb 14 01:46:24.174561 kernel: Detected PIPT I-cache on CPU79 Feb 14 01:46:24.174569 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Feb 14 01:46:24.174577 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Feb 14 01:46:24.174585 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 01:46:24.174593 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Feb 14 01:46:24.174600 kernel: smp: Brought up 1 node, 80 CPUs Feb 14 01:46:24.174608 kernel: SMP: Total of 80 processors activated. 
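The per-CPU bring-up messages above repeat a fixed pattern for each of the 79 secondary CPUs: PIPT I-cache detection, GICv3 redistributor discovery, LPI pending-table allocation, the erratum 1418040 workaround, and a final boot confirmation, ending in the "Brought up 1 node, 80 CPUs" summary. A minimal sketch of tallying booted CPUs from a captured excerpt like this one (the `count_booted_cpus` helper and the sample lines are illustrative, not part of the log):

```python
import re

def count_booted_cpus(log_text: str) -> int:
    """Count secondary CPUs reported as booted, plus the boot CPU."""
    secondaries = re.findall(r"CPU(\d+): Booted secondary processor", log_text)
    # CPU0 is the boot CPU; it never logs a "Booted secondary processor" line,
    # so add 1 to the count of distinct secondaries.
    return len(set(secondaries)) + 1

sample = (
    "kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]\n"
    "kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]\n"
)
print(count_booted_cpus(sample))  # 3 (two secondaries plus the boot CPU)
```

On the full log above this would report 80, matching the kernel's own "SMP: Total of 80 processors activated." summary.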
Feb 14 01:46:24.174617 kernel: CPU features: detected: 32-bit EL0 Support Feb 14 01:46:24.174625 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 14 01:46:24.174633 kernel: CPU features: detected: Common not Private translations Feb 14 01:46:24.174640 kernel: CPU features: detected: CRC32 instructions Feb 14 01:46:24.174648 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 14 01:46:24.174656 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 14 01:46:24.174664 kernel: CPU features: detected: LSE atomic instructions Feb 14 01:46:24.174672 kernel: CPU features: detected: Privileged Access Never Feb 14 01:46:24.174679 kernel: CPU features: detected: RAS Extension Support Feb 14 01:46:24.174689 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 14 01:46:24.174696 kernel: CPU: All CPU(s) started at EL2 Feb 14 01:46:24.174704 kernel: alternatives: applying system-wide alternatives Feb 14 01:46:24.174712 kernel: devtmpfs: initialized Feb 14 01:46:24.174720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 14 01:46:24.174728 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.174735 kernel: pinctrl core: initialized pinctrl subsystem Feb 14 01:46:24.174743 kernel: SMBIOS 3.4.0 present. 
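The "CPU features: detected:" lines above enumerate the architectural extensions the kernel found on this Altra (32-bit EL0 support, CRC32, LSE atomics, PAN, RAS, SSBS, and so on). A hedged sketch of extracting that feature list from such a log excerpt (the `detected_features` helper and sample lines are illustrative, not part of the log):

```python
import re

def detected_features(log_text: str) -> list:
    """Return the feature names from 'CPU features: detected:' log lines."""
    # '.' does not match newlines, so each match is the remainder of one line.
    return re.findall(r"CPU features: detected: (.+)", log_text)

sample = (
    "kernel: CPU features: detected: CRC32 instructions\n"
    "kernel: CPU features: detected: LSE atomic instructions\n"
)
print(detected_features(sample))
# ['CRC32 instructions', 'LSE atomic instructions']
```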
Feb 14 01:46:24.174751 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Feb 14 01:46:24.174760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 14 01:46:24.174768 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Feb 14 01:46:24.174776 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 14 01:46:24.174784 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 14 01:46:24.174791 kernel: audit: initializing netlink subsys (disabled) Feb 14 01:46:24.174799 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Feb 14 01:46:24.174807 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 14 01:46:24.174815 kernel: cpuidle: using governor menu Feb 14 01:46:24.174822 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 14 01:46:24.174832 kernel: ASID allocator initialised with 32768 entries Feb 14 01:46:24.174840 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 14 01:46:24.174847 kernel: Serial: AMBA PL011 UART driver Feb 14 01:46:24.174855 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 14 01:46:24.174863 kernel: Modules: 0 pages in range for non-PLT usage Feb 14 01:46:24.174870 kernel: Modules: 509040 pages in range for PLT usage Feb 14 01:46:24.174878 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 14 01:46:24.174886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 14 01:46:24.174894 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 14 01:46:24.174903 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 14 01:46:24.174911 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 14 01:46:24.174918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 14 01:46:24.174926 kernel: HugeTLB: registered 64.0 KiB 
page size, pre-allocated 0 pages Feb 14 01:46:24.174934 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 14 01:46:24.174942 kernel: ACPI: Added _OSI(Module Device) Feb 14 01:46:24.174950 kernel: ACPI: Added _OSI(Processor Device) Feb 14 01:46:24.174957 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 14 01:46:24.174965 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 14 01:46:24.174974 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Feb 14 01:46:24.174982 kernel: ACPI: Interpreter enabled Feb 14 01:46:24.174990 kernel: ACPI: Using GIC for interrupt routing Feb 14 01:46:24.174997 kernel: ACPI: MCFG table detected, 8 entries Feb 14 01:46:24.175005 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175013 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175021 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175029 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175037 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175046 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175053 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175061 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Feb 14 01:46:24.175069 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Feb 14 01:46:24.175077 kernel: printk: console [ttyAMA0] enabled Feb 14 01:46:24.175085 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Feb 14 01:46:24.175093 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Feb 14 01:46:24.175229 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.175308 kernel: acpi PNP0A08:00: _OSC: platform does not 
support [PCIeHotplug PME LTR] Feb 14 01:46:24.175376 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.175440 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.175503 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.175567 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Feb 14 01:46:24.175578 kernel: PCI host bridge to bus 000d:00 Feb 14 01:46:24.175649 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Feb 14 01:46:24.175712 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Feb 14 01:46:24.175770 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Feb 14 01:46:24.175853 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 01:46:24.175928 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 01:46:24.175996 kernel: pci 000d:00:01.0: enabling Extended Tags Feb 14 01:46:24.176063 kernel: pci 000d:00:01.0: supports D1 D2 Feb 14 01:46:24.176132 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176224 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 01:46:24.176290 kernel: pci 000d:00:02.0: supports D1 D2 Feb 14 01:46:24.176356 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176428 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 01:46:24.176495 kernel: pci 000d:00:03.0: supports D1 D2 Feb 14 01:46:24.176561 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176638 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 01:46:24.176705 kernel: pci 000d:00:04.0: supports D1 D2 Feb 14 01:46:24.176773 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.176784 kernel: acpiphp: Slot [1] registered Feb 14 01:46:24.176792 
kernel: acpiphp: Slot [2] registered Feb 14 01:46:24.176799 kernel: acpiphp: Slot [3] registered Feb 14 01:46:24.176807 kernel: acpiphp: Slot [4] registered Feb 14 01:46:24.176867 kernel: pci_bus 000d:00: on NUMA node 0 Feb 14 01:46:24.176935 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.177003 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.177074 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.177143 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.177214 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.177282 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.177353 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.177422 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.177488 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.177556 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.177622 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.177688 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.177755 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Feb 14 01:46:24.177825 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 
01:46:24.177890 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] Feb 14 01:46:24.177957 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 01:46:24.178023 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Feb 14 01:46:24.178090 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 01:46:24.178156 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Feb 14 01:46:24.178226 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 01:46:24.178294 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178361 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178428 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178494 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178561 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178629 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178696 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178763 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178832 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.178898 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.178964 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.179030 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.179097 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.179163 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.179233 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.179300 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] Feb 14 01:46:24.179365 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.179435 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Feb 14 01:46:24.179500 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 01:46:24.179568 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Feb 14 01:46:24.179633 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Feb 14 01:46:24.179701 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 01:46:24.179767 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Feb 14 01:46:24.179836 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Feb 14 01:46:24.179901 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 01:46:24.179969 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Feb 14 01:46:24.180034 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Feb 14 01:46:24.180101 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 01:46:24.180163 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Feb 14 01:46:24.180226 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Feb 14 01:46:24.180301 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Feb 14 01:46:24.180365 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 01:46:24.180436 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Feb 14 01:46:24.180499 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 01:46:24.180579 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Feb 14 01:46:24.180644 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 01:46:24.180714 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Feb 14 
01:46:24.180776 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 01:46:24.180786 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Feb 14 01:46:24.180858 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.180922 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.180987 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.181053 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.181117 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.181184 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Feb 14 01:46:24.181195 kernel: PCI host bridge to bus 0000:00 Feb 14 01:46:24.181262 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Feb 14 01:46:24.181324 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 01:46:24.181383 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 14 01:46:24.181460 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 01:46:24.181534 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 01:46:24.181602 kernel: pci 0000:00:01.0: enabling Extended Tags Feb 14 01:46:24.181667 kernel: pci 0000:00:01.0: supports D1 D2 Feb 14 01:46:24.181734 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.181808 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 01:46:24.181878 kernel: pci 0000:00:02.0: supports D1 D2 Feb 14 01:46:24.181944 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.182019 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 01:46:24.182085 kernel: pci 0000:00:03.0: supports D1 D2 Feb 14 01:46:24.182153 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.182229 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 01:46:24.182296 kernel: pci 0000:00:04.0: supports D1 D2 Feb 14 01:46:24.182364 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.182375 kernel: acpiphp: Slot [1-1] registered Feb 14 01:46:24.182383 kernel: acpiphp: Slot [2-1] registered Feb 14 01:46:24.182390 kernel: acpiphp: Slot [3-1] registered Feb 14 01:46:24.182398 kernel: acpiphp: Slot [4-1] registered Feb 14 01:46:24.182456 kernel: pci_bus 0000:00: on NUMA node 0 Feb 14 01:46:24.182523 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.182589 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.182656 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.182724 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.182790 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.182856 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.182924 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.182990 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.183057 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.183125 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.183195 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 Feb 14 01:46:24.183261 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.183329 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Feb 14 01:46:24.183395 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 01:46:24.183462 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Feb 14 01:46:24.183527 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 01:46:24.183594 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Feb 14 01:46:24.183664 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 01:46:24.183731 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Feb 14 01:46:24.183798 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 01:46:24.183864 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.183931 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.183996 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184062 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184127 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184200 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184267 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184333 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184399 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184465 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184530 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184597 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184663 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184731 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184798 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.184862 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.184929 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.184995 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Feb 14 01:46:24.185062 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 01:46:24.185127 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Feb 14 01:46:24.185197 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Feb 14 01:46:24.185266 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 01:46:24.185335 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Feb 14 01:46:24.185401 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Feb 14 01:46:24.185471 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 01:46:24.185536 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Feb 14 01:46:24.185602 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Feb 14 01:46:24.185668 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 01:46:24.185730 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Feb 14 01:46:24.185788 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 01:46:24.185862 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Feb 14 01:46:24.185925 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 01:46:24.185994 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Feb 14 
01:46:24.186058 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 01:46:24.186135 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Feb 14 01:46:24.186202 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 01:46:24.186272 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Feb 14 01:46:24.186337 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 01:46:24.186348 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Feb 14 01:46:24.186418 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.186484 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.186548 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.186612 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.186679 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.186742 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Feb 14 01:46:24.186753 kernel: PCI host bridge to bus 0005:00 Feb 14 01:46:24.186819 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Feb 14 01:46:24.186881 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 01:46:24.186941 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Feb 14 01:46:24.187016 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 01:46:24.187098 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 01:46:24.187165 kernel: pci 0005:00:01.0: supports D1 D2 Feb 14 01:46:24.187237 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187311 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
Feb 14 01:46:24.187381 kernel: pci 0005:00:03.0: supports D1 D2 Feb 14 01:46:24.187448 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187522 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 01:46:24.187591 kernel: pci 0005:00:05.0: supports D1 D2 Feb 14 01:46:24.187659 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187733 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Feb 14 01:46:24.187800 kernel: pci 0005:00:07.0: supports D1 D2 Feb 14 01:46:24.187866 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.187877 kernel: acpiphp: Slot [1-2] registered Feb 14 01:46:24.187885 kernel: acpiphp: Slot [2-2] registered Feb 14 01:46:24.187960 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Feb 14 01:46:24.188030 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Feb 14 01:46:24.188098 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Feb 14 01:46:24.188213 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Feb 14 01:46:24.188296 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Feb 14 01:46:24.188369 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Feb 14 01:46:24.188429 kernel: pci_bus 0005:00: on NUMA node 0 Feb 14 01:46:24.188500 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.188566 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.188637 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.188715 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.188782 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.188849 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.188920 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.188988 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.189054 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 14 01:46:24.189122 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.189193 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.189261 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Feb 14 01:46:24.189327 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Feb 14 01:46:24.189399 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 01:46:24.189465 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Feb 14 01:46:24.189532 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 01:46:24.189598 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Feb 14 01:46:24.189665 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 01:46:24.189732 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Feb 14 01:46:24.189798 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 01:46:24.189866 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.189934 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190001 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 
01:46:24.190067 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190135 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190205 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190273 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190339 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190406 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190475 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190541 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190606 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190673 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190740 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190806 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.190874 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.190940 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.191005 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Feb 14 01:46:24.191074 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 01:46:24.191141 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Feb 14 01:46:24.191217 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Feb 14 01:46:24.191282 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 01:46:24.191353 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Feb 14 01:46:24.191421 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Feb 14 01:46:24.191490 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Feb 14 01:46:24.191556 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Feb 14 01:46:24.191623 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 01:46:24.191694 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Feb 14 01:46:24.191762 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Feb 14 01:46:24.191829 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Feb 14 01:46:24.191895 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Feb 14 01:46:24.191965 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 01:46:24.192026 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Feb 14 01:46:24.192087 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 01:46:24.192159 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Feb 14 01:46:24.192226 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 01:46:24.192304 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Feb 14 01:46:24.192370 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 01:46:24.192438 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Feb 14 01:46:24.192502 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 01:46:24.192571 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Feb 14 01:46:24.192635 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 01:46:24.192646 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Feb 14 01:46:24.192726 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.192795 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.192866 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability] Feb 14 01:46:24.192934 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.193005 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.193070 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Feb 14 01:46:24.193080 kernel: PCI host bridge to bus 0003:00 Feb 14 01:46:24.193151 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Feb 14 01:46:24.193216 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Feb 14 01:46:24.193276 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Feb 14 01:46:24.193349 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 01:46:24.193428 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 01:46:24.193495 kernel: pci 0003:00:01.0: supports D1 D2 Feb 14 01:46:24.193565 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.193637 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 01:46:24.193706 kernel: pci 0003:00:03.0: supports D1 D2 Feb 14 01:46:24.193774 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.193846 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 01:46:24.193916 kernel: pci 0003:00:05.0: supports D1 D2 Feb 14 01:46:24.193981 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.193994 kernel: acpiphp: Slot [1-3] registered Feb 14 01:46:24.194002 kernel: acpiphp: Slot [2-3] registered Feb 14 01:46:24.194078 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Feb 14 01:46:24.194148 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Feb 14 01:46:24.194270 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Feb 14 01:46:24.194341 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Feb 14 01:46:24.194409 kernel: pci 
0003:03:00.0: PME# supported from D0 D3hot D3cold Feb 14 01:46:24.194475 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Feb 14 01:46:24.194545 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 01:46:24.194611 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Feb 14 01:46:24.194678 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 01:46:24.194746 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Feb 14 01:46:24.194821 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Feb 14 01:46:24.194889 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Feb 14 01:46:24.194955 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Feb 14 01:46:24.195025 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Feb 14 01:46:24.195091 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Feb 14 01:46:24.195158 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Feb 14 01:46:24.195235 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 01:46:24.195304 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Feb 14 01:46:24.195370 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 01:46:24.195429 kernel: pci_bus 0003:00: on NUMA node 0 Feb 14 01:46:24.195500 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.195565 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.195630 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.195696 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.195761 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.195825 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.195892 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Feb 14 01:46:24.195958 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Feb 14 01:46:24.196025 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Feb 14 01:46:24.196090 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 01:46:24.196157 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 14 01:46:24.196227 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 01:46:24.196307 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 14 01:46:24.196377 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 01:46:24.196443 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.196512 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196578 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.196645 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196710 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.196776 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196842 kernel: pci 0003:00:05.0: BAR 13: no 
space for [io size 0x1000] Feb 14 01:46:24.196908 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.196974 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.197043 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.197108 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.197176 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.197247 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.197313 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Feb 14 01:46:24.197381 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 01:46:24.197446 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Feb 14 01:46:24.197513 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Feb 14 01:46:24.197581 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 01:46:24.197651 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] Feb 14 01:46:24.197724 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] Feb 14 01:46:24.197793 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] Feb 14 01:46:24.197862 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] Feb 14 01:46:24.197931 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] Feb 14 01:46:24.198002 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] Feb 14 01:46:24.198072 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] Feb 14 01:46:24.198141 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] Feb 14 01:46:24.198213 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Feb 14 01:46:24.198282 kernel: pci 0003:03:00.0: BAR 2: failed to 
assign [io size 0x0020] Feb 14 01:46:24.198350 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Feb 14 01:46:24.198419 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Feb 14 01:46:24.198489 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Feb 14 01:46:24.198558 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Feb 14 01:46:24.198627 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Feb 14 01:46:24.198696 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Feb 14 01:46:24.198764 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Feb 14 01:46:24.198830 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Feb 14 01:46:24.198897 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 01:46:24.198961 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 14 01:46:24.199025 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Feb 14 01:46:24.199084 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Feb 14 01:46:24.199167 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Feb 14 01:46:24.199232 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 01:46:24.199304 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Feb 14 01:46:24.199366 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 01:46:24.199438 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Feb 14 01:46:24.199500 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 01:46:24.199511 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Feb 14 01:46:24.199584 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.199650 kernel: acpi PNP0A08:04: _OSC: platform does not 
support [PCIeHotplug PME LTR] Feb 14 01:46:24.199714 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.199781 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.199845 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.199912 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Feb 14 01:46:24.199924 kernel: PCI host bridge to bus 000c:00 Feb 14 01:46:24.199991 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Feb 14 01:46:24.200052 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Feb 14 01:46:24.200110 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Feb 14 01:46:24.200192 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 01:46:24.200266 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 01:46:24.200334 kernel: pci 000c:00:01.0: enabling Extended Tags Feb 14 01:46:24.200400 kernel: pci 000c:00:01.0: supports D1 D2 Feb 14 01:46:24.200467 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.200540 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 01:46:24.200608 kernel: pci 000c:00:02.0: supports D1 D2 Feb 14 01:46:24.200678 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.200752 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 01:46:24.200820 kernel: pci 000c:00:03.0: supports D1 D2 Feb 14 01:46:24.200886 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.200959 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 01:46:24.201026 kernel: pci 000c:00:04.0: supports D1 D2 Feb 14 01:46:24.201093 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.201106 kernel: acpiphp: Slot [1-4] registered Feb 14 01:46:24.201114 
kernel: acpiphp: Slot [2-4] registered Feb 14 01:46:24.201123 kernel: acpiphp: Slot [3-2] registered Feb 14 01:46:24.201131 kernel: acpiphp: Slot [4-2] registered Feb 14 01:46:24.201445 kernel: pci_bus 000c:00: on NUMA node 0 Feb 14 01:46:24.201526 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.201592 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.201658 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.201728 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.201794 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.201860 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.201925 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.201990 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.202055 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.202123 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.202197 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.202263 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.202330 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] Feb 14 01:46:24.202395 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] 
Feb 14 01:46:24.202460 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] Feb 14 01:46:24.202525 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 01:46:24.202590 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] Feb 14 01:46:24.202658 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 01:46:24.202723 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] Feb 14 01:46:24.202789 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 01:46:24.202853 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.202919 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.202984 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.203049 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.203113 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.203184 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.203249 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.203314 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.203379 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.203444 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.203508 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.203573 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.203638 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.203702 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.203770 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.203836 kernel: pci 
000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.203901 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.203965 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Feb 14 01:46:24.204031 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 01:46:24.204096 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Feb 14 01:46:24.204162 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Feb 14 01:46:24.204233 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 01:46:24.204299 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Feb 14 01:46:24.204364 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Feb 14 01:46:24.204429 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 01:46:24.204495 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Feb 14 01:46:24.204559 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Feb 14 01:46:24.204627 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 01:46:24.204687 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Feb 14 01:46:24.204746 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Feb 14 01:46:24.204816 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Feb 14 01:46:24.204878 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 01:46:24.204955 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Feb 14 01:46:24.205017 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 01:46:24.205088 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Feb 14 01:46:24.205149 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 01:46:24.205221 kernel: pci_bus 000c:04: resource 1 [mem 
0x40600000-0x407fffff] Feb 14 01:46:24.205284 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 01:46:24.205294 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Feb 14 01:46:24.205365 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.205432 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.205495 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.205557 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.205620 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.205682 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Feb 14 01:46:24.205693 kernel: PCI host bridge to bus 0002:00 Feb 14 01:46:24.205761 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Feb 14 01:46:24.205821 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Feb 14 01:46:24.205880 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Feb 14 01:46:24.205952 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 01:46:24.206026 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 01:46:24.206091 kernel: pci 0002:00:01.0: supports D1 D2 Feb 14 01:46:24.206157 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.206233 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 01:46:24.206301 kernel: pci 0002:00:03.0: supports D1 D2 Feb 14 01:46:24.206366 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.206438 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 01:46:24.206503 kernel: pci 0002:00:05.0: supports D1 D2 Feb 14 01:46:24.206568 kernel: pci 0002:00:05.0: PME# supported 
from D0 D1 D3hot Feb 14 01:46:24.206640 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 Feb 14 01:46:24.206708 kernel: pci 0002:00:07.0: supports D1 D2 Feb 14 01:46:24.206773 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.206783 kernel: acpiphp: Slot [1-5] registered Feb 14 01:46:24.206792 kernel: acpiphp: Slot [2-5] registered Feb 14 01:46:24.206800 kernel: acpiphp: Slot [3-3] registered Feb 14 01:46:24.206808 kernel: acpiphp: Slot [4-3] registered Feb 14 01:46:24.206864 kernel: pci_bus 0002:00: on NUMA node 0 Feb 14 01:46:24.206929 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.206994 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.207066 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 01:46:24.207135 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.207204 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.207270 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.207340 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.207406 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.207474 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.207540 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.207606 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 
01:46:24.207673 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.207739 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] Feb 14 01:46:24.207808 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 01:46:24.207873 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] Feb 14 01:46:24.207938 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 01:46:24.208002 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] Feb 14 01:46:24.208068 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 01:46:24.208133 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] Feb 14 01:46:24.208202 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 01:46:24.208266 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.208335 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.208399 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.208468 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.208533 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.208598 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.208663 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.208727 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.208793 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.208860 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.208926 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.208990 kernel: pci 0002:00:05.0: BAR 13: failed 
to assign [io size 0x1000] Feb 14 01:46:24.209056 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.209120 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.209214 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.209283 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.209348 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.209412 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Feb 14 01:46:24.209481 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 01:46:24.209545 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Feb 14 01:46:24.209609 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Feb 14 01:46:24.209674 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 01:46:24.209739 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Feb 14 01:46:24.209803 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Feb 14 01:46:24.209870 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 01:46:24.209935 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Feb 14 01:46:24.210000 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Feb 14 01:46:24.210064 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 01:46:24.210125 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Feb 14 01:46:24.210188 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Feb 14 01:46:24.210263 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Feb 14 01:46:24.210325 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 01:46:24.210394 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Feb 14 01:46:24.210454 kernel: pci_bus 0002:02: 
resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 01:46:24.210531 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Feb 14 01:46:24.210593 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 01:46:24.210663 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Feb 14 01:46:24.210724 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 01:46:24.210736 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Feb 14 01:46:24.210807 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.210872 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.210937 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.211002 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.211068 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.211132 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Feb 14 01:46:24.211143 kernel: PCI host bridge to bus 0001:00 Feb 14 01:46:24.211211 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Feb 14 01:46:24.211273 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Feb 14 01:46:24.211333 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Feb 14 01:46:24.211409 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 01:46:24.211482 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 01:46:24.211548 kernel: pci 0001:00:01.0: enabling Extended Tags Feb 14 01:46:24.211624 kernel: pci 0001:00:01.0: supports D1 D2 Feb 14 01:46:24.211692 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.211765 kernel: pci 0001:00:02.0: 
[1def:e102] type 01 class 0x060400 Feb 14 01:46:24.211832 kernel: pci 0001:00:02.0: supports D1 D2 Feb 14 01:46:24.211900 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.211972 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 01:46:24.212038 kernel: pci 0001:00:03.0: supports D1 D2 Feb 14 01:46:24.212103 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.212177 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 01:46:24.212250 kernel: pci 0001:00:04.0: supports D1 D2 Feb 14 01:46:24.212318 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.212331 kernel: acpiphp: Slot [1-6] registered Feb 14 01:46:24.212404 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 14 01:46:24.212473 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] Feb 14 01:46:24.212541 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] Feb 14 01:46:24.212608 kernel: pci 0001:01:00.0: PME# supported from D3cold Feb 14 01:46:24.212676 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 14 01:46:24.212750 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 14 01:46:24.212822 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] Feb 14 01:46:24.212889 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] Feb 14 01:46:24.212956 kernel: pci 0001:01:00.1: PME# supported from D3cold Feb 14 01:46:24.212967 kernel: acpiphp: Slot [2-6] registered Feb 14 01:46:24.212975 kernel: acpiphp: Slot [3-4] registered Feb 14 01:46:24.212983 kernel: acpiphp: Slot [4-4] registered Feb 14 01:46:24.213041 kernel: pci_bus 0001:00: on NUMA node 0 Feb 14 01:46:24.213108 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 01:46:24.213302 kernel: pci 
0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 01:46:24.213394 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.213461 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 01:46:24.213528 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.213593 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.213658 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.213723 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.213792 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.213859 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.213925 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 01:46:24.213990 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] Feb 14 01:46:24.214055 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] Feb 14 01:46:24.214120 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 01:46:24.214189 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] Feb 14 01:46:24.214258 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 01:46:24.214322 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] Feb 14 01:46:24.214388 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 01:46:24.214452 kernel: 
pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.214517 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.214580 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.214645 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.214710 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.214777 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.214843 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.214907 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.214973 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.215037 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.215102 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.215166 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.215234 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.215301 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.215370 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.215434 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.215502 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] Feb 14 01:46:24.215569 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] Feb 14 01:46:24.215637 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] Feb 14 01:46:24.215704 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] Feb 14 01:46:24.215768 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Feb 14 01:46:24.215836 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Feb 14 01:46:24.215900 
kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 01:46:24.215966 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Feb 14 01:46:24.216030 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Feb 14 01:46:24.216095 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 01:46:24.216160 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Feb 14 01:46:24.216230 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Feb 14 01:46:24.216295 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 01:46:24.216361 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Feb 14 01:46:24.216425 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Feb 14 01:46:24.216491 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 01:46:24.216551 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Feb 14 01:46:24.216609 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Feb 14 01:46:24.216692 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Feb 14 01:46:24.216753 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 01:46:24.216822 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Feb 14 01:46:24.216883 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 01:46:24.216951 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Feb 14 01:46:24.217012 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 01:46:24.217082 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Feb 14 01:46:24.217142 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 01:46:24.217153 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Feb 14 01:46:24.217227 kernel: acpi 
PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 01:46:24.217292 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 01:46:24.217355 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Feb 14 01:46:24.217420 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 01:46:24.217484 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Feb 14 01:46:24.217547 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Feb 14 01:46:24.217558 kernel: PCI host bridge to bus 0004:00 Feb 14 01:46:24.217622 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Feb 14 01:46:24.217682 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Feb 14 01:46:24.217739 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Feb 14 01:46:24.217815 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 01:46:24.217887 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 01:46:24.217954 kernel: pci 0004:00:01.0: supports D1 D2 Feb 14 01:46:24.218019 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.218092 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 01:46:24.218158 kernel: pci 0004:00:03.0: supports D1 D2 Feb 14 01:46:24.218228 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.218304 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 01:46:24.218371 kernel: pci 0004:00:05.0: supports D1 D2 Feb 14 01:46:24.218436 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Feb 14 01:46:24.218512 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 Feb 14 01:46:24.218581 kernel: pci 0004:01:00.0: enabling Extended Tags Feb 14 01:46:24.218647 kernel: pci 0004:01:00.0: supports D1 D2 Feb 14 
01:46:24.218714 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 14 01:46:24.218795 kernel: pci_bus 0004:02: extended config space not accessible Feb 14 01:46:24.218874 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 Feb 14 01:46:24.218944 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] Feb 14 01:46:24.219014 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] Feb 14 01:46:24.219084 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] Feb 14 01:46:24.219153 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb Feb 14 01:46:24.219226 kernel: pci 0004:02:00.0: supports D1 D2 Feb 14 01:46:24.219299 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 14 01:46:24.219377 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 Feb 14 01:46:24.219445 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] Feb 14 01:46:24.219512 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Feb 14 01:46:24.219574 kernel: pci_bus 0004:00: on NUMA node 0 Feb 14 01:46:24.219639 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Feb 14 01:46:24.219707 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 01:46:24.219774 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 01:46:24.219840 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 14 01:46:24.219907 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 01:46:24.219973 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 01:46:24.220038 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 
01:46:24.220104 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Feb 14 01:46:24.220168 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 01:46:24.220241 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] Feb 14 01:46:24.220307 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 01:46:24.220373 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] Feb 14 01:46:24.220438 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 01:46:24.220504 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.220571 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.220635 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.220701 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.220769 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.220835 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.220900 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.220965 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.221031 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.221097 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.221161 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.221230 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.221299 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Feb 14 01:46:24.221370 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] Feb 14 01:46:24.221439 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] Feb 14 01:46:24.221509 kernel: pci 
0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] Feb 14 01:46:24.221580 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] Feb 14 01:46:24.221650 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] Feb 14 01:46:24.221720 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] Feb 14 01:46:24.221787 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Feb 14 01:46:24.221857 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Feb 14 01:46:24.221923 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Feb 14 01:46:24.221988 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Feb 14 01:46:24.222054 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 01:46:24.222122 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] Feb 14 01:46:24.222191 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Feb 14 01:46:24.222257 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Feb 14 01:46:24.222322 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 01:46:24.222390 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Feb 14 01:46:24.222456 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Feb 14 01:46:24.222521 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 01:46:24.222582 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 14 01:46:24.222640 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Feb 14 01:46:24.222701 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Feb 14 01:46:24.222772 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Feb 14 01:46:24.222834 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 01:46:24.222900 kernel: pci_bus 0004:02: resource 1 [mem 
0x20000000-0x22ffffff] Feb 14 01:46:24.222968 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Feb 14 01:46:24.223030 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 01:46:24.223098 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Feb 14 01:46:24.223161 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 01:46:24.223172 kernel: iommu: Default domain type: Translated Feb 14 01:46:24.223184 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 14 01:46:24.223193 kernel: efivars: Registered efivars operations Feb 14 01:46:24.223264 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Feb 14 01:46:24.223335 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Feb 14 01:46:24.223405 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Feb 14 01:46:24.223416 kernel: vgaarb: loaded Feb 14 01:46:24.223427 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 14 01:46:24.223436 kernel: VFS: Disk quotas dquot_6.6.0 Feb 14 01:46:24.223444 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 14 01:46:24.223453 kernel: pnp: PnP ACPI init Feb 14 01:46:24.223524 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Feb 14 01:46:24.223585 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Feb 14 01:46:24.223646 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Feb 14 01:46:24.223708 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Feb 14 01:46:24.223768 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Feb 14 01:46:24.223828 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Feb 14 01:46:24.223889 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved Feb 14 01:46:24.223949 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Feb 14 01:46:24.223960 kernel: pnp: PnP ACPI: found 1 devices Feb 14 01:46:24.223968 kernel: NET: Registered PF_INET protocol family Feb 14 01:46:24.223977 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.223987 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 14 01:46:24.223996 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 14 01:46:24.224004 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 14 01:46:24.224012 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.224021 kernel: TCP: Hash tables configured (established 524288 bind 65536) Feb 14 01:46:24.224029 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.224037 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 01:46:24.224046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 14 01:46:24.224114 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Feb 14 01:46:24.224127 kernel: kvm [1]: IPA Size Limit: 48 bits Feb 14 01:46:24.224136 kernel: kvm [1]: GICv3: no GICV resource entry Feb 14 01:46:24.224144 kernel: kvm [1]: disabling GICv2 emulation Feb 14 01:46:24.224152 kernel: kvm [1]: GIC system register CPU interface enabled Feb 14 01:46:24.224161 kernel: kvm [1]: vgic interrupt IRQ9 Feb 14 01:46:24.224169 kernel: kvm [1]: VHE mode initialized successfully Feb 14 01:46:24.224177 kernel: Initialise system trusted keyrings Feb 14 01:46:24.224189 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Feb 14 01:46:24.224198 kernel: Key type asymmetric registered Feb 14 01:46:24.224207 kernel: Asymmetric key parser 'x509' registered Feb 14 01:46:24.224215 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Feb 14 01:46:24.224223 kernel: io scheduler mq-deadline registered Feb 14 01:46:24.224232 kernel: io scheduler kyber registered Feb 14 01:46:24.224240 kernel: io scheduler bfq registered Feb 14 01:46:24.224248 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 14 01:46:24.224256 kernel: ACPI: button: Power Button [PWRB] Feb 14 01:46:24.224265 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). Feb 14 01:46:24.224273 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 14 01:46:24.224351 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Feb 14 01:46:24.224414 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.224477 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.224538 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.224600 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Feb 14 01:46:24.224661 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Feb 14 01:46:24.224733 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Feb 14 01:46:24.224794 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.224856 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.224917 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.224978 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Feb 14 01:46:24.225040 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Feb 14 01:46:24.225108 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Feb 14 01:46:24.225173 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.225238 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.225303 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.225364 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Feb 14 01:46:24.225427 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Feb 14 01:46:24.225496 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Feb 14 01:46:24.225562 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.225624 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.225687 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.225748 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Feb 14 01:46:24.225812 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Feb 14 01:46:24.225890 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Feb 14 01:46:24.225953 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.226018 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.226079 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.226143 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Feb 14 01:46:24.226207 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Feb 14 01:46:24.226281 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Feb 14 01:46:24.226344 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.226409 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.226471 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.226533 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq Feb 14 01:46:24.226595 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Feb 14 01:46:24.226665 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Feb 14 01:46:24.226727 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.226790 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.226855 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.226918 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Feb 14 01:46:24.226984 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Feb 14 01:46:24.227051 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Feb 14 01:46:24.227115 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 01:46:24.227177 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 01:46:24.227246 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Feb 14 01:46:24.227309 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Feb 14 01:46:24.227372 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Feb 14 01:46:24.227383 kernel: thunder_xcv, ver 1.0 Feb 14 01:46:24.227391 kernel: thunder_bgx, ver 1.0 Feb 14 01:46:24.227400 kernel: nicpf, ver 1.0 Feb 14 01:46:24.227408 kernel: nicvf, ver 1.0 Feb 14 01:46:24.227477 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 14 01:46:24.227544 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-14T01:46:22 UTC (1739497582) Feb 14 01:46:24.227555 kernel: efifb: probing for efifb Feb 14 01:46:24.227563 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Feb 14 01:46:24.227572 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Feb 14 01:46:24.227580 kernel: efifb: scrolling: redraw Feb 
14 01:46:24.227588 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 14 01:46:24.227597 kernel: Console: switching to colour frame buffer device 100x37 Feb 14 01:46:24.227605 kernel: fb0: EFI VGA frame buffer device Feb 14 01:46:24.227615 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Feb 14 01:46:24.227624 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 14 01:46:24.227632 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 14 01:46:24.227640 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 14 01:46:24.227648 kernel: watchdog: Hard watchdog permanently disabled Feb 14 01:46:24.227657 kernel: NET: Registered PF_INET6 protocol family Feb 14 01:46:24.227665 kernel: Segment Routing with IPv6 Feb 14 01:46:24.227673 kernel: In-situ OAM (IOAM) with IPv6 Feb 14 01:46:24.227681 kernel: NET: Registered PF_PACKET protocol family Feb 14 01:46:24.227689 kernel: Key type dns_resolver registered Feb 14 01:46:24.227698 kernel: registered taskstats version 1 Feb 14 01:46:24.227707 kernel: Loading compiled-in X.509 certificates Feb 14 01:46:24.227715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 14 01:46:24.227723 kernel: Key type .fscrypt registered Feb 14 01:46:24.227731 kernel: Key type fscrypt-provisioning registered Feb 14 01:46:24.227741 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 14 01:46:24.227749 kernel: ima: Allocated hash algorithm: sha1 Feb 14 01:46:24.227757 kernel: ima: No architecture policies found Feb 14 01:46:24.227766 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 14 01:46:24.227836 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Feb 14 01:46:24.227906 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Feb 14 01:46:24.227974 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Feb 14 01:46:24.228041 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Feb 14 01:46:24.228109 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Feb 14 01:46:24.228177 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Feb 14 01:46:24.228250 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Feb 14 01:46:24.228316 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Feb 14 01:46:24.228388 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Feb 14 01:46:24.228454 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Feb 14 01:46:24.228523 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Feb 14 01:46:24.228590 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Feb 14 01:46:24.228658 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Feb 14 01:46:24.228725 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Feb 14 01:46:24.228794 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Feb 14 01:46:24.228860 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Feb 14 01:46:24.228933 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Feb 14 01:46:24.228999 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Feb 14 01:46:24.229068 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Feb 14 01:46:24.229134 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Feb 14 01:46:24.229206 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Feb 14 01:46:24.229272 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Feb 14 01:46:24.229340 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 Feb 14 01:46:24.229408 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Feb 14 01:46:24.229476 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Feb 14 01:46:24.229545 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Feb 14 01:46:24.229612 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Feb 14 01:46:24.229680 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Feb 14 01:46:24.229747 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Feb 14 01:46:24.229814 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Feb 14 01:46:24.229882 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Feb 14 01:46:24.229950 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Feb 14 01:46:24.230018 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Feb 14 01:46:24.230087 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Feb 14 01:46:24.230154 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Feb 14 01:46:24.230224 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Feb 14 01:46:24.230292 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Feb 14 01:46:24.230362 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Feb 14 01:46:24.230431 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Feb 14 01:46:24.230498 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Feb 14 01:46:24.230565 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Feb 14 01:46:24.230634 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Feb 14 01:46:24.230702 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Feb 14 01:46:24.230768 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Feb 14 01:46:24.230836 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Feb 14 01:46:24.230902 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Feb 14 01:46:24.230971 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Feb 14 01:46:24.231036 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
Feb 14 01:46:24.231104 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Feb 14 01:46:24.231172 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Feb 14 01:46:24.231246 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Feb 14 01:46:24.231312 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Feb 14 01:46:24.231380 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Feb 14 01:46:24.231447 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Feb 14 01:46:24.231516 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Feb 14 01:46:24.231583 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Feb 14 01:46:24.231650 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Feb 14 01:46:24.231721 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Feb 14 01:46:24.231789 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Feb 14 01:46:24.231857 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Feb 14 01:46:24.231927 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Feb 14 01:46:24.231938 kernel: clk: Disabling unused clocks Feb 14 01:46:24.231946 kernel: Freeing unused kernel memory: 39360K Feb 14 01:46:24.231954 kernel: Run /init as init process Feb 14 01:46:24.231963 kernel: with arguments: Feb 14 01:46:24.231973 kernel: /init Feb 14 01:46:24.231981 kernel: with environment: Feb 14 01:46:24.231989 kernel: HOME=/ Feb 14 01:46:24.231997 kernel: TERM=linux Feb 14 01:46:24.232005 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 14 01:46:24.232016 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 01:46:24.232026 systemd[1]: Detected architecture arm64. Feb 14 01:46:24.232035 systemd[1]: Running in initrd. 
Feb 14 01:46:24.232045 systemd[1]: No hostname configured, using default hostname. Feb 14 01:46:24.232054 systemd[1]: Hostname set to . Feb 14 01:46:24.232062 systemd[1]: Initializing machine ID from random generator. Feb 14 01:46:24.232072 systemd[1]: Queued start job for default target initrd.target. Feb 14 01:46:24.232081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 01:46:24.232089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 01:46:24.232099 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 14 01:46:24.232108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 01:46:24.232118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 14 01:46:24.232127 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 14 01:46:24.232136 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 14 01:46:24.232146 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 14 01:46:24.232154 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 01:46:24.232163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 01:46:24.232171 systemd[1]: Reached target paths.target - Path Units. Feb 14 01:46:24.232186 systemd[1]: Reached target slices.target - Slice Units. Feb 14 01:46:24.232195 systemd[1]: Reached target swap.target - Swaps. Feb 14 01:46:24.232203 systemd[1]: Reached target timers.target - Timer Units. Feb 14 01:46:24.232212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 14 01:46:24.232220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 01:46:24.232229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 14 01:46:24.232238 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 14 01:46:24.232246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 01:46:24.232257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 01:46:24.232266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 01:46:24.232274 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 01:46:24.232283 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 14 01:46:24.232291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 01:46:24.232300 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 14 01:46:24.232309 systemd[1]: Starting systemd-fsck-usr.service... Feb 14 01:46:24.232317 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 01:46:24.232326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 01:46:24.232360 systemd-journald[901]: Collecting audit messages is disabled. Feb 14 01:46:24.232381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 01:46:24.232390 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 14 01:46:24.232399 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 14 01:46:24.232409 kernel: Bridge firewalling registered Feb 14 01:46:24.232418 systemd-journald[901]: Journal started Feb 14 01:46:24.232437 systemd-journald[901]: Runtime Journal (/run/log/journal/b8f0334ac3dd44128ddc77c1b71d4a2d) is 8.0M, max 4.0G, 3.9G free. 
Feb 14 01:46:24.188801 systemd-modules-load[903]: Inserted module 'overlay' Feb 14 01:46:24.263878 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 01:46:24.211790 systemd-modules-load[903]: Inserted module 'br_netfilter' Feb 14 01:46:24.269429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 01:46:24.280108 systemd[1]: Finished systemd-fsck-usr.service. Feb 14 01:46:24.290953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 01:46:24.301588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:46:24.330306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 01:46:24.360330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 01:46:24.366541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 01:46:24.377526 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 01:46:24.393737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 01:46:24.409833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 01:46:24.426389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 01:46:24.437629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 01:46:24.466283 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 14 01:46:24.479582 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 14 01:46:24.486077 dracut-cmdline[946]: dracut-dracut-053 Feb 14 01:46:24.499096 dracut-cmdline[946]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 14 01:46:24.493237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 01:46:24.507251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 01:46:24.516305 systemd-resolved[955]: Positive Trust Anchors: Feb 14 01:46:24.516314 systemd-resolved[955]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 01:46:24.516347 systemd-resolved[955]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 01:46:24.531343 systemd-resolved[955]: Defaulting to hostname 'linux'. Feb 14 01:46:24.544199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 01:46:24.660271 kernel: SCSI subsystem initialized Feb 14 01:46:24.563364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 01:46:24.676198 kernel: Loading iSCSI transport class v2.0-870. 
Feb 14 01:46:24.689194 kernel: iscsi: registered transport (tcp) Feb 14 01:46:24.716884 kernel: iscsi: registered transport (qla4xxx) Feb 14 01:46:24.716906 kernel: QLogic iSCSI HBA Driver Feb 14 01:46:24.762221 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 14 01:46:24.786353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 14 01:46:24.831294 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 14 01:46:24.831312 kernel: device-mapper: uevent: version 1.0.3 Feb 14 01:46:24.841001 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 14 01:46:24.906190 kernel: raid6: neonx8 gen() 15848 MB/s Feb 14 01:46:24.932189 kernel: raid6: neonx4 gen() 15716 MB/s Feb 14 01:46:24.957189 kernel: raid6: neonx2 gen() 13476 MB/s Feb 14 01:46:24.982189 kernel: raid6: neonx1 gen() 10551 MB/s Feb 14 01:46:25.007189 kernel: raid6: int64x8 gen() 6985 MB/s Feb 14 01:46:25.032189 kernel: raid6: int64x4 gen() 7384 MB/s Feb 14 01:46:25.057189 kernel: raid6: int64x2 gen() 6152 MB/s Feb 14 01:46:25.085505 kernel: raid6: int64x1 gen() 5075 MB/s Feb 14 01:46:25.085526 kernel: raid6: using algorithm neonx8 gen() 15848 MB/s Feb 14 01:46:25.119984 kernel: raid6: .... xor() 11969 MB/s, rmw enabled Feb 14 01:46:25.120005 kernel: raid6: using neon recovery algorithm Feb 14 01:46:25.143311 kernel: xor: measuring software checksum speed Feb 14 01:46:25.143335 kernel: 8regs : 19807 MB/sec Feb 14 01:46:25.151390 kernel: 32regs : 19646 MB/sec Feb 14 01:46:25.159204 kernel: arm64_neon : 27204 MB/sec Feb 14 01:46:25.166898 kernel: xor: using function: arm64_neon (27204 MB/sec) Feb 14 01:46:25.228188 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 14 01:46:25.237795 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 14 01:46:25.250323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 01:46:25.263486 systemd-udevd[1146]: Using default interface naming scheme 'v255'.
Feb 14 01:46:25.266562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 01:46:25.287283 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 14 01:46:25.301397 dracut-pre-trigger[1156]: rd.md=0: removing MD RAID activation
Feb 14 01:46:25.327878 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 01:46:25.348342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 14 01:46:25.453954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 01:46:25.483571 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 14 01:46:25.483594 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 14 01:46:25.485298 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 14 01:46:25.643618 kernel: ACPI: bus type USB registered
Feb 14 01:46:25.643653 kernel: usbcore: registered new interface driver usbfs
Feb 14 01:46:25.643675 kernel: usbcore: registered new interface driver hub
Feb 14 01:46:25.643695 kernel: usbcore: registered new device driver usb
Feb 14 01:46:25.643711 kernel: PTP clock support registered
Feb 14 01:46:25.643721 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31
Feb 14 01:46:25.893891 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Feb 14 01:46:25.893987 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
Feb 14 01:46:25.894073 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
Feb 14 01:46:25.894155 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 14 01:46:25.894166 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32
Feb 14 01:46:26.532900 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 14 01:46:26.532924 kernel: igb 0003:03:00.0: Adding to iommu group 33
Feb 14 01:46:26.533096 kernel: nvme 0005:03:00.0: Adding to iommu group 34
Feb 14 01:46:26.533199 kernel: nvme 0005:04:00.0: Adding to iommu group 35
Feb 14 01:46:26.533289 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
Feb 14 01:46:26.533371 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Feb 14 01:46:26.533454 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
Feb 14 01:46:26.533536 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 14 01:46:26.533614 kernel: hub 1-0:1.0: USB hub found
Feb 14 01:46:26.533721 kernel: hub 1-0:1.0: 4 ports detected
Feb 14 01:46:26.533808 kernel: mlx5_core 0001:01:00.0: firmware version: 14.30.1004
Feb 14 01:46:26.533889 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 14 01:46:26.533968 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 14 01:46:26.534105 kernel: hub 2-0:1.0: USB hub found
Feb 14 01:46:26.534214 kernel: hub 2-0:1.0: 4 ports detected
Feb 14 01:46:26.534304 kernel: nvme nvme0: pci function 0005:03:00.0
Feb 14 01:46:26.534396 kernel: nvme nvme1: pci function 0005:04:00.0
Feb 14 01:46:26.534480 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
Feb 14 01:46:26.534557 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
Feb 14 01:46:26.534632 kernel: igb 0003:03:00.0: added PHC on eth0
Feb 14 01:46:26.534717 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 14 01:46:26.534798 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6c
Feb 14 01:46:26.534876 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
Feb 14 01:46:26.534955 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Feb 14 01:46:26.535034 kernel: igb 0003:03:00.1: Adding to iommu group 36
Feb 14 01:46:26.535119 kernel: nvme nvme0: 32/0/0 default/read/poll queues
Feb 14 01:46:26.535203 kernel: nvme nvme1: 32/0/0 default/read/poll queues
Feb 14 01:46:26.535280 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 14 01:46:26.535292 kernel: GPT:9289727 != 1875385007
Feb 14 01:46:26.535302 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 14 01:46:26.535311 kernel: GPT:9289727 != 1875385007
Feb 14 01:46:26.535321 kernel: igb 0003:03:00.1: added PHC on eth1
Feb 14 01:46:26.535401 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 14 01:46:26.535411 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:26.535424 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
Feb 14 01:46:26.535504 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
Feb 14 01:46:26.535638 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:6d
Feb 14 01:46:26.535721 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1207)
Feb 14 01:46:26.535732 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
Feb 14 01:46:26.535810 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (1225)
Feb 14 01:46:26.535821 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Feb 14 01:46:26.535903 kernel: igb 0003:03:00.1 eno2: renamed from eth1
Feb 14 01:46:26.535983 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
Feb 14 01:46:26.536063 kernel: igb 0003:03:00.0 eno1: renamed from eth0
Feb 14 01:46:26.536143 kernel: hub 1-3:1.0: USB hub found
Feb 14 01:46:26.536246 kernel: hub 1-3:1.0: 4 ports detected
Feb 14 01:46:26.536336 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:26.536347 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:26.536357 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
Feb 14 01:46:26.536489 kernel: hub 2-3:1.0: USB hub found
Feb 14 01:46:26.536588 kernel: hub 2-3:1.0: 4 ports detected
Feb 14 01:46:26.536677 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 14 01:46:26.536761 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
Feb 14 01:46:27.122399 kernel: mlx5_core 0001:01:00.1: firmware version: 14.30.1004
Feb 14 01:46:27.122538 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 14 01:46:27.122618 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
Feb 14 01:46:27.122698 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 14 01:46:25.547436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 14 01:46:27.138457 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
Feb 14 01:46:25.547587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:46:27.159716 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
Feb 14 01:46:25.671805 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 01:46:25.677577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 01:46:25.677733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:25.683508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:46:25.698527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:46:25.704613 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 14 01:46:27.198600 disk-uuid[1309]: Primary Header is updated.
Feb 14 01:46:27.198600 disk-uuid[1309]: Secondary Entries is updated.
Feb 14 01:46:27.198600 disk-uuid[1309]: Secondary Header is updated.
Feb 14 01:46:25.711749 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 01:46:25.717431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 01:46:25.722964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 14 01:46:25.740345 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 14 01:46:25.746266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 01:46:25.746340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:25.753468 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 01:46:25.767292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 01:46:25.777138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:25.886285 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 01:46:26.044330 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:46:26.178014 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
Feb 14 01:46:26.278820 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
Feb 14 01:46:26.287976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
Feb 14 01:46:26.295952 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Feb 14 01:46:26.300407 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Feb 14 01:46:26.317329 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 14 01:46:27.359682 disk-uuid[1310]: The operation has completed successfully.
Feb 14 01:46:27.365335 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 14 01:46:27.384798 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 14 01:46:27.384882 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 14 01:46:27.419327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 14 01:46:27.430501 sh[1487]: Success
Feb 14 01:46:27.449186 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 14 01:46:27.482380 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 14 01:46:27.503398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 14 01:46:27.514823 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 14 01:46:27.609013 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 14 01:46:27.609039 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:27.609059 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 14 01:46:27.609086 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 14 01:46:27.609106 kernel: BTRFS info (device dm-0): using free space tree
Feb 14 01:46:27.609125 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 14 01:46:27.615158 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 14 01:46:27.622470 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 14 01:46:27.636347 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 14 01:46:27.716014 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:27.716029 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:27.716040 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 14 01:46:27.716050 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 14 01:46:27.716060 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Feb 14 01:46:27.643575 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 14 01:46:27.752550 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:27.742811 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 14 01:46:27.771310 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 14 01:46:27.827245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 01:46:27.836339 ignition[1590]: Ignition 2.19.0
Feb 14 01:46:27.836345 ignition[1590]: Stage: fetch-offline
Feb 14 01:46:27.843674 unknown[1590]: fetched base config from "system"
Feb 14 01:46:27.836401 ignition[1590]: no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:27.843682 unknown[1590]: fetched user config from "system"
Feb 14 01:46:27.836409 ignition[1590]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:27.856406 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 14 01:46:27.836705 ignition[1590]: parsed url from cmdline: ""
Feb 14 01:46:27.864043 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 01:46:27.836708 ignition[1590]: no config URL provided
Feb 14 01:46:27.880095 systemd-networkd[1729]: lo: Link UP
Feb 14 01:46:27.836712 ignition[1590]: reading system config file "/usr/lib/ignition/user.ign"
Feb 14 01:46:27.880099 systemd-networkd[1729]: lo: Gained carrier
Feb 14 01:46:27.836763 ignition[1590]: parsing config with SHA512: e0682aa1e70f1d06cb393943ecc32de9ba1c769cd8dbad1b366262e529c60d403786d44f6970e26b9361bd92edf746b325df0a836b8285effa3f6eb191d2e5e3
Feb 14 01:46:27.883729 systemd-networkd[1729]: Enumeration completed
Feb 14 01:46:27.844267 ignition[1590]: fetch-offline: fetch-offline passed
Feb 14 01:46:27.883847 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 14 01:46:27.844271 ignition[1590]: POST message to Packet Timeline
Feb 14 01:46:27.884846 systemd-networkd[1729]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:27.844276 ignition[1590]: POST Status error: resource requires networking
Feb 14 01:46:27.889277 systemd[1]: Reached target network.target - Network.
Feb 14 01:46:27.844344 ignition[1590]: Ignition finished successfully
Feb 14 01:46:27.898969 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 14 01:46:27.936692 ignition[1735]: Ignition 2.19.0
Feb 14 01:46:27.912342 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 14 01:46:27.936697 ignition[1735]: Stage: kargs
Feb 14 01:46:27.936639 systemd-networkd[1729]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:27.936983 ignition[1735]: no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:27.987768 systemd-networkd[1729]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:27.936992 ignition[1735]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:27.938160 ignition[1735]: kargs: kargs passed
Feb 14 01:46:27.938164 ignition[1735]: POST message to Packet Timeline
Feb 14 01:46:27.938178 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:27.940873 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50095->[::1]:53: read: connection refused
Feb 14 01:46:28.140951 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #2
Feb 14 01:46:28.141353 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49058->[::1]:53: read: connection refused
Feb 14 01:46:28.536195 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
Feb 14 01:46:28.538996 systemd-networkd[1729]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 01:46:28.541509 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #3
Feb 14 01:46:28.541910 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34009->[::1]:53: read: connection refused
Feb 14 01:46:29.120192 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
Feb 14 01:46:29.122826 systemd-networkd[1729]: eno1: Link UP
Feb 14 01:46:29.122955 systemd-networkd[1729]: eno2: Link UP
Feb 14 01:46:29.123070 systemd-networkd[1729]: enP1p1s0f0np0: Link UP
Feb 14 01:46:29.123221 systemd-networkd[1729]: enP1p1s0f0np0: Gained carrier
Feb 14 01:46:29.135404 systemd-networkd[1729]: enP1p1s0f1np1: Link UP
Feb 14 01:46:29.168211 systemd-networkd[1729]: enP1p1s0f0np0: DHCPv4 address 147.75.62.106/30, gateway 147.75.62.105 acquired from 147.28.144.140
Feb 14 01:46:29.342041 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #4
Feb 14 01:46:29.342482 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41308->[::1]:53: read: connection refused
Feb 14 01:46:29.544347 systemd-networkd[1729]: enP1p1s0f1np1: Gained carrier
Feb 14 01:46:30.152411 systemd-networkd[1729]: enP1p1s0f0np0: Gained IPv6LL
Feb 14 01:46:30.943787 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #5
Feb 14 01:46:30.944468 ignition[1735]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38373->[::1]:53: read: connection refused
Feb 14 01:46:31.304384 systemd-networkd[1729]: enP1p1s0f1np1: Gained IPv6LL
Feb 14 01:46:34.147319 ignition[1735]: GET https://metadata.packet.net/metadata: attempt #6
Feb 14 01:46:35.198798 ignition[1735]: GET result: OK
Feb 14 01:46:35.488506 ignition[1735]: Ignition finished successfully
Feb 14 01:46:35.492276 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 14 01:46:35.508295 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 14 01:46:35.523868 ignition[1754]: Ignition 2.19.0
Feb 14 01:46:35.523875 ignition[1754]: Stage: disks
Feb 14 01:46:35.524073 ignition[1754]: no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:35.524082 ignition[1754]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:35.525367 ignition[1754]: disks: disks passed
Feb 14 01:46:35.525372 ignition[1754]: POST message to Packet Timeline
Feb 14 01:46:35.525386 ignition[1754]: GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:36.308811 ignition[1754]: GET result: OK
Feb 14 01:46:36.632311 ignition[1754]: Ignition finished successfully
Feb 14 01:46:36.635700 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 14 01:46:36.641229 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 14 01:46:36.648879 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 14 01:46:36.656970 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 01:46:36.665653 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 14 01:46:36.674725 systemd[1]: Reached target basic.target - Basic System.
Feb 14 01:46:36.693325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 14 01:46:36.708639 systemd-fsck[1777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 14 01:46:36.712643 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 14 01:46:36.729255 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 14 01:46:36.794011 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 14 01:46:36.799004 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 14 01:46:36.804302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 14 01:46:36.829240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 01:46:36.921531 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1787)
Feb 14 01:46:36.921549 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:36.921560 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:36.921570 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 14 01:46:36.921583 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 14 01:46:36.921593 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Feb 14 01:46:36.835328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 14 01:46:36.931694 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 14 01:46:36.938464 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Feb 14 01:46:36.955574 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 14 01:46:36.955603 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 01:46:36.988023 coreos-metadata[1808]: Feb 14 01:46:36.985 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 14 01:46:37.004648 coreos-metadata[1807]: Feb 14 01:46:36.985 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 14 01:46:36.968700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 01:46:36.982660 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 14 01:46:37.002394 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 14 01:46:37.037504 initrd-setup-root[1826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 14 01:46:37.043567 initrd-setup-root[1833]: cut: /sysroot/etc/group: No such file or directory
Feb 14 01:46:37.049960 initrd-setup-root[1841]: cut: /sysroot/etc/shadow: No such file or directory
Feb 14 01:46:37.056156 initrd-setup-root[1849]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 14 01:46:37.124832 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 14 01:46:37.146255 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 14 01:46:37.176877 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:37.152663 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 14 01:46:37.183193 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 14 01:46:37.198618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 14 01:46:37.204142 coreos-metadata[1807]: Feb 14 01:46:37.199 INFO Fetch successful
Feb 14 01:46:37.215019 ignition[1922]: INFO : Ignition 2.19.0
Feb 14 01:46:37.215019 ignition[1922]: INFO : Stage: mount
Feb 14 01:46:37.215019 ignition[1922]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:37.215019 ignition[1922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:37.215019 ignition[1922]: INFO : mount: mount passed
Feb 14 01:46:37.215019 ignition[1922]: INFO : POST message to Packet Timeline
Feb 14 01:46:37.215019 ignition[1922]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:37.259582 coreos-metadata[1807]: Feb 14 01:46:37.243 INFO wrote hostname ci-4081.3.1-a-385c1ddb28 to /sysroot/etc/hostname
Feb 14 01:46:37.246338 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 14 01:46:38.174396 ignition[1922]: INFO : GET result: OK
Feb 14 01:46:38.253535 coreos-metadata[1808]: Feb 14 01:46:38.253 INFO Fetch successful
Feb 14 01:46:38.300148 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 14 01:46:38.300241 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Feb 14 01:46:38.556939 ignition[1922]: INFO : Ignition finished successfully
Feb 14 01:46:38.559076 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 14 01:46:38.577292 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 14 01:46:38.589782 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 01:46:38.625760 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1948)
Feb 14 01:46:38.625796 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 01:46:38.640183 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 01:46:38.653244 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 14 01:46:38.676250 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 14 01:46:38.676272 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Feb 14 01:46:38.684409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 01:46:38.716389 ignition[1966]: INFO : Ignition 2.19.0
Feb 14 01:46:38.716389 ignition[1966]: INFO : Stage: files
Feb 14 01:46:38.725974 ignition[1966]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:38.725974 ignition[1966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:38.725974 ignition[1966]: DEBUG : files: compiled without relabeling support, skipping
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 14 01:46:38.725974 ignition[1966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 14 01:46:38.725974 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 14 01:46:38.721458 unknown[1966]: wrote ssh authorized keys file for user: core
Feb 14 01:46:38.881521 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 14 01:46:38.979084 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:38.989720 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 14 01:46:39.189477 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 14 01:46:39.579458 ignition[1966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 14 01:46:39.579458 ignition[1966]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 01:46:39.603970 ignition[1966]: INFO : files: files passed
Feb 14 01:46:39.603970 ignition[1966]: INFO : POST message to Packet Timeline
Feb 14 01:46:39.603970 ignition[1966]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:41.098671 ignition[1966]: INFO : GET result: OK
Feb 14 01:46:41.447582 ignition[1966]: INFO : Ignition finished successfully
Feb 14 01:46:41.451347 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 14 01:46:41.465305 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 14 01:46:41.472163 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 14 01:46:41.484012 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 14 01:46:41.484088 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 14 01:46:41.519331 initrd-setup-root-after-ignition[2012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 01:46:41.519331 initrd-setup-root-after-ignition[2012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 01:46:41.502271 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 01:46:41.565647 initrd-setup-root-after-ignition[2016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 01:46:41.515106 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 14 01:46:41.541364 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 14 01:46:41.579637 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 14 01:46:41.579710 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 14 01:46:41.589833 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 14 01:46:41.605852 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 14 01:46:41.617274 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 14 01:46:41.627337 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 14 01:46:41.650290 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 01:46:41.680361 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 14 01:46:41.702374 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 14 01:46:41.708291 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 01:46:41.719876 systemd[1]: Stopped target timers.target - Timer Units.
Feb 14 01:46:41.731413 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 14 01:46:41.731511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 01:46:41.743119 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 14 01:46:41.754400 systemd[1]: Stopped target basic.target - Basic System.
Feb 14 01:46:41.765810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 14 01:46:41.777221 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 01:46:41.788431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 14 01:46:41.799693 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 14 01:46:41.810936 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 01:46:41.822212 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 14 01:46:41.833461 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 14 01:46:41.850175 systemd[1]: Stopped target swap.target - Swaps.
Feb 14 01:46:41.861543 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 14 01:46:41.861634 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 01:46:41.873103 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 14 01:46:41.884227 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 01:46:41.895212 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 14 01:46:41.896233 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 01:46:41.906309 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 14 01:46:41.906400 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 14 01:46:41.917642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 14 01:46:41.917729 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 01:46:41.928777 systemd[1]: Stopped target paths.target - Path Units.
Feb 14 01:46:41.939782 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 14 01:46:41.944203 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 01:46:41.956659 systemd[1]: Stopped target slices.target - Slice Units.
Feb 14 01:46:41.967937 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 14 01:46:41.979495 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 14 01:46:41.979600 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 14 01:46:42.084656 ignition[2041]: INFO : Ignition 2.19.0
Feb 14 01:46:42.084656 ignition[2041]: INFO : Stage: umount
Feb 14 01:46:42.084656 ignition[2041]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 01:46:42.084656 ignition[2041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 14 01:46:42.084656 ignition[2041]: INFO : umount: umount passed
Feb 14 01:46:42.084656 ignition[2041]: INFO : POST message to Packet Timeline
Feb 14 01:46:42.084656 ignition[2041]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 14 01:46:41.990913 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 14 01:46:41.991010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 01:46:42.002460 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 14 01:46:42.002545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 01:46:42.013898 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 14 01:46:42.013975 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 14 01:46:42.025326 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 14 01:46:42.025405 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 14 01:46:42.048301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 14 01:46:42.055007 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 14 01:46:42.066969 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 14 01:46:42.067070 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 01:46:42.078971 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 14 01:46:42.079054 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 01:46:42.092548 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 14 01:46:42.093978 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 14 01:46:42.094064 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 14 01:46:42.129423 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 14 01:46:42.129529 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 14 01:46:43.084973 ignition[2041]: INFO : GET result: OK
Feb 14 01:46:43.390042 ignition[2041]: INFO : Ignition finished successfully
Feb 14 01:46:43.392714 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 14 01:46:43.392924 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 14 01:46:43.400088 systemd[1]: Stopped target network.target - Network.
Feb 14 01:46:43.409002 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 14 01:46:43.409057 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 14 01:46:43.418541 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 14 01:46:43.418573 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 14 01:46:43.427975 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 14 01:46:43.428021 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 14 01:46:43.437552 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 14 01:46:43.437596 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 14 01:46:43.447234 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 14 01:46:43.447260 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 14 01:46:43.457085 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 14 01:46:43.465205 systemd-networkd[1729]: enP1p1s0f1np1: DHCPv6 lease lost
Feb 14 01:46:43.466558 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 14 01:46:43.476231 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 14 01:46:43.476341 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 14 01:46:43.477289 systemd-networkd[1729]: enP1p1s0f0np0: DHCPv6 lease lost
Feb 14 01:46:43.488063 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 14 01:46:43.488174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 01:46:43.496398 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 14 01:46:43.496581 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 14 01:46:43.506641 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 14 01:46:43.506838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 01:46:43.526322 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 14 01:46:43.535222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 14 01:46:43.535285 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 01:46:43.545317 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 14 01:46:43.545349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 14 01:46:43.555191 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 14 01:46:43.555219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 14 01:46:43.565471 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 01:46:43.587547 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 14 01:46:43.587674 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 01:46:43.598748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 14 01:46:43.598871 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 14 01:46:43.607896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 14 01:46:43.607947 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 01:46:43.618606 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 14 01:46:43.618649 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 14 01:46:43.629557 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 14 01:46:43.629595 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 14 01:46:43.640195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 14 01:46:43.640241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 01:46:43.663286 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 14 01:46:43.673229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 14 01:46:43.673292 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 01:46:43.684350 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 14 01:46:43.684395 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 14 01:46:43.695421 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 14 01:46:43.695449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 01:46:43.706749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 01:46:43.706778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 01:46:43.718650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 14 01:46:43.718721 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 14 01:46:44.217633 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 14 01:46:44.217757 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 14 01:46:44.230354 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 14 01:46:44.252327 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 14 01:46:44.262140 systemd[1]: Switching root.
Feb 14 01:46:44.316287 systemd-journald[901]: Journal stopped
Feb 14 01:46:46.256812 systemd-journald[901]: Received SIGTERM from PID 1 (systemd).
Feb 14 01:46:46.256839 kernel: SELinux: policy capability network_peer_controls=1
Feb 14 01:46:46.256849 kernel: SELinux: policy capability open_perms=1
Feb 14 01:46:46.256858 kernel: SELinux: policy capability extended_socket_class=1
Feb 14 01:46:46.256865 kernel: SELinux: policy capability always_check_network=0
Feb 14 01:46:46.256873 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 14 01:46:46.256881 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 14 01:46:46.256891 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 14 01:46:46.256899 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 14 01:46:46.256907 kernel: audit: type=1403 audit(1739497604.491:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 14 01:46:46.256916 systemd[1]: Successfully loaded SELinux policy in 113.836ms.
Feb 14 01:46:46.256926 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.511ms.
Feb 14 01:46:46.256936 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 14 01:46:46.256945 systemd[1]: Detected architecture arm64.
Feb 14 01:46:46.256956 systemd[1]: Detected first boot.
Feb 14 01:46:46.256965 systemd[1]: Hostname set to .
Feb 14 01:46:46.256975 systemd[1]: Initializing machine ID from random generator.
Feb 14 01:46:46.256984 zram_generator::config[2122]: No configuration found.
Feb 14 01:46:46.256995 systemd[1]: Populated /etc with preset unit settings.
Feb 14 01:46:46.257004 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 14 01:46:46.257013 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 14 01:46:46.257022 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 14 01:46:46.257031 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 14 01:46:46.257041 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 14 01:46:46.257050 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 14 01:46:46.257059 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 14 01:46:46.257070 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 14 01:46:46.257079 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 14 01:46:46.257088 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 14 01:46:46.257097 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 14 01:46:46.257106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 01:46:46.257116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 01:46:46.257125 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 14 01:46:46.257136 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 14 01:46:46.257145 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 14 01:46:46.257155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 14 01:46:46.257164 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 14 01:46:46.257173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 01:46:46.257185 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 14 01:46:46.257195 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 14 01:46:46.257206 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 14 01:46:46.257219 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 14 01:46:46.257230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 01:46:46.257240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 14 01:46:46.257249 systemd[1]: Reached target slices.target - Slice Units.
Feb 14 01:46:46.257258 systemd[1]: Reached target swap.target - Swaps.
Feb 14 01:46:46.257268 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 14 01:46:46.257277 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 14 01:46:46.257286 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 01:46:46.257297 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 14 01:46:46.257306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 01:46:46.257316 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 14 01:46:46.257326 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 14 01:46:46.257335 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 14 01:46:46.257346 systemd[1]: Mounting media.mount - External Media Directory...
Feb 14 01:46:46.257355 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 14 01:46:46.257365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 14 01:46:46.257374 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 14 01:46:46.257384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 14 01:46:46.257394 systemd[1]: Reached target machines.target - Containers.
Feb 14 01:46:46.257403 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 14 01:46:46.257413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 01:46:46.257424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 14 01:46:46.257433 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 14 01:46:46.257443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 14 01:46:46.257452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 14 01:46:46.257462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 14 01:46:46.257471 kernel: ACPI: bus type drm_connector registered
Feb 14 01:46:46.257480 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 14 01:46:46.257489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 14 01:46:46.257498 kernel: fuse: init (API version 7.39)
Feb 14 01:46:46.257508 kernel: loop: module loaded
Feb 14 01:46:46.257517 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 14 01:46:46.257526 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 14 01:46:46.257536 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 14 01:46:46.257545 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 14 01:46:46.257554 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 14 01:46:46.257564 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 14 01:46:46.257574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 14 01:46:46.257598 systemd-journald[2225]: Collecting audit messages is disabled.
Feb 14 01:46:46.257617 systemd-journald[2225]: Journal started
Feb 14 01:46:46.257637 systemd-journald[2225]: Runtime Journal (/run/log/journal/dd1f4519de4a4fb2a5d79b60c8c238d1) is 8.0M, max 4.0G, 3.9G free.
Feb 14 01:46:45.009519 systemd[1]: Queued start job for default target multi-user.target.
Feb 14 01:46:45.029483 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 14 01:46:45.029824 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 14 01:46:45.031356 systemd[1]: systemd-journald.service: Consumed 3.463s CPU time.
Feb 14 01:46:46.281198 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 14 01:46:46.308194 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 14 01:46:46.328183 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 14 01:46:46.350966 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 14 01:46:46.351002 systemd[1]: Stopped verity-setup.service.
Feb 14 01:46:46.375198 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 14 01:46:46.380638 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 14 01:46:46.385993 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 14 01:46:46.391283 systemd[1]: Mounted media.mount - External Media Directory.
Feb 14 01:46:46.396441 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 14 01:46:46.401613 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 14 01:46:46.406745 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 14 01:46:46.411947 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 14 01:46:46.417257 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 01:46:46.422571 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 14 01:46:46.422726 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 14 01:46:46.427947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 14 01:46:46.429211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 14 01:46:46.434433 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 14 01:46:46.435290 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 14 01:46:46.440456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 14 01:46:46.440596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 14 01:46:46.445609 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 14 01:46:46.445740 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 14 01:46:46.450799 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 14 01:46:46.450942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 14 01:46:46.455903 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 14 01:46:46.460696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 14 01:46:46.465715 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 14 01:46:46.470730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 01:46:46.485802 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 14 01:46:46.511403 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 14 01:46:46.517271 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 14 01:46:46.522079 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 14 01:46:46.522111 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 01:46:46.527641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 14 01:46:46.533240 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 14 01:46:46.539088 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 14 01:46:46.543910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 14 01:46:46.545508 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 14 01:46:46.551120 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 14 01:46:46.555851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 14 01:46:46.556901 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 14 01:46:46.561584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 14 01:46:46.562380 systemd-journald[2225]: Time spent on flushing to /var/log/journal/dd1f4519de4a4fb2a5d79b60c8c238d1 is 25.847ms for 2349 entries.
Feb 14 01:46:46.562380 systemd-journald[2225]: System Journal (/var/log/journal/dd1f4519de4a4fb2a5d79b60c8c238d1) is 8.0M, max 195.6M, 187.6M free.
Feb 14 01:46:46.604958 systemd-journald[2225]: Received client request to flush runtime journal.
Feb 14 01:46:46.605003 kernel: loop0: detected capacity change from 0 to 114432
Feb 14 01:46:46.562660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 14 01:46:46.580515 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 14 01:46:46.586177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 14 01:46:46.591805 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 14 01:46:46.618062 systemd-tmpfiles[2267]: ACLs are not supported, ignoring.
Feb 14 01:46:46.627341 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 14 01:46:46.618075 systemd-tmpfiles[2267]: ACLs are not supported, ignoring.
Feb 14 01:46:46.627588 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 14 01:46:46.632175 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 14 01:46:46.638210 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 14 01:46:46.642861 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 14 01:46:46.647503 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 14 01:46:46.652357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 14 01:46:46.663555 kernel: loop1: detected capacity change from 0 to 114328
Feb 14 01:46:46.667049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 14 01:46:46.677686 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 14 01:46:46.694479 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 14 01:46:46.700458 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 14 01:46:46.705186 kernel: loop2: detected capacity change from 0 to 189592
Feb 14 01:46:46.716729 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 14 01:46:46.729783 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 14 01:46:46.735718 udevadm[2270]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 14 01:46:46.748472 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 14 01:46:46.772192 kernel: loop3: detected capacity change from 0 to 8
Feb 14 01:46:46.774420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 14 01:46:46.786799 systemd-tmpfiles[2305]: ACLs are not supported, ignoring.
Feb 14 01:46:46.786812 systemd-tmpfiles[2305]: ACLs are not supported, ignoring.
Feb 14 01:46:46.790495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 01:46:46.813087 ldconfig[2253]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 14 01:46:46.814719 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 14 01:46:46.815186 kernel: loop4: detected capacity change from 0 to 114432
Feb 14 01:46:46.840193 kernel: loop5: detected capacity change from 0 to 114328
Feb 14 01:46:46.856191 kernel: loop6: detected capacity change from 0 to 189592
Feb 14 01:46:46.872187 kernel: loop7: detected capacity change from 0 to 8
Feb 14 01:46:46.872733 (sd-merge)[2309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Feb 14 01:46:46.873133 (sd-merge)[2309]: Merged extensions into '/usr'.
Feb 14 01:46:46.875969 systemd[1]: Reloading requested from client PID 2265 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 14 01:46:46.875981 systemd[1]: Reloading...
Feb 14 01:46:46.917189 zram_generator::config[2338]: No configuration found.
Feb 14 01:46:47.010729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 01:46:47.059284 systemd[1]: Reloading finished in 182 ms.
Feb 14 01:46:47.088686 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 14 01:46:47.093545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 14 01:46:47.117376 systemd[1]: Starting ensure-sysext.service...
Feb 14 01:46:47.123122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 14 01:46:47.129550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 01:46:47.136407 systemd[1]: Reloading requested from client PID 2388 ('systemctl') (unit ensure-sysext.service)...
Feb 14 01:46:47.136418 systemd[1]: Reloading...
Feb 14 01:46:47.143477 systemd-tmpfiles[2389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 14 01:46:47.143735 systemd-tmpfiles[2389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 14 01:46:47.144389 systemd-tmpfiles[2389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 14 01:46:47.144613 systemd-tmpfiles[2389]: ACLs are not supported, ignoring.
Feb 14 01:46:47.144658 systemd-tmpfiles[2389]: ACLs are not supported, ignoring.
Feb 14 01:46:47.149028 systemd-tmpfiles[2389]: Detected autofs mount point /boot during canonicalization of boot.
Feb 14 01:46:47.149036 systemd-tmpfiles[2389]: Skipping /boot
Feb 14 01:46:47.154840 systemd-udevd[2390]: Using default interface naming scheme 'v255'.
Feb 14 01:46:47.155963 systemd-tmpfiles[2389]: Detected autofs mount point /boot during canonicalization of boot.
Feb 14 01:46:47.155971 systemd-tmpfiles[2389]: Skipping /boot
Feb 14 01:46:47.181187 zram_generator::config[2422]: No configuration found.
Feb 14 01:46:47.214190 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2457)
Feb 14 01:46:47.227187 kernel: IPMI message handler: version 39.2
Feb 14 01:46:47.237188 kernel: ipmi device interface
Feb 14 01:46:47.249188 kernel: ipmi_ssif: IPMI SSIF Interface driver
Feb 14 01:46:47.249219 kernel: ipmi_si: IPMI System Interface driver
Feb 14 01:46:47.262169 kernel: ipmi_si: Unable to find any System Interface(s)
Feb 14 01:46:47.296511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 01:46:47.358516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
Feb 14 01:46:47.363274 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 14 01:46:47.363427 systemd[1]: Reloading finished in 226 ms.
Feb 14 01:46:47.380822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 01:46:47.401528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 01:46:47.421109 systemd[1]: Finished ensure-sysext.service.
Feb 14 01:46:47.425926 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 14 01:46:47.455380 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 14 01:46:47.461325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 14 01:46:47.466336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 14 01:46:47.467366 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 14 01:46:47.473160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 01:46:47.478973 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 01:46:47.484561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 01:46:47.485527 lvm[2576]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 01:46:47.490074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 01:46:47.492693 augenrules[2592]: No rules Feb 14 01:46:47.494892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 01:46:47.495823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 14 01:46:47.501561 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 14 01:46:47.507863 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 01:46:47.514454 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 01:46:47.520468 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 14 01:46:47.525970 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 14 01:46:47.531504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 01:46:47.536766 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 01:46:47.541724 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 14 01:46:47.546609 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 14 01:46:47.551418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 14 01:46:47.551550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 01:46:47.556543 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 01:46:47.557207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 01:46:47.561946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 01:46:47.562074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 01:46:47.566856 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 01:46:47.566997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 01:46:47.572230 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 14 01:46:47.577149 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 14 01:46:47.582657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 01:46:47.596383 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 01:46:47.612367 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 14 01:46:47.616760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 01:46:47.616826 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 01:46:47.618007 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 14 01:46:47.619685 lvm[2624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 01:46:47.624467 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Feb 14 01:46:47.629134 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 14 01:46:47.629561 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 14 01:46:47.634496 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 14 01:46:47.653719 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 14 01:46:47.658910 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 14 01:46:47.708583 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 14 01:46:47.713575 systemd[1]: Reached target time-set.target - System Time Set. Feb 14 01:46:47.716483 systemd-resolved[2602]: Positive Trust Anchors: Feb 14 01:46:47.716496 systemd-resolved[2602]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 01:46:47.716527 systemd-resolved[2602]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 01:46:47.720054 systemd-resolved[2602]: Using system hostname 'ci-4081.3.1-a-385c1ddb28'. Feb 14 01:46:47.721428 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Feb 14 01:46:47.722848 systemd-networkd[2600]: lo: Link UP Feb 14 01:46:47.722854 systemd-networkd[2600]: lo: Gained carrier Feb 14 01:46:47.726505 systemd-networkd[2600]: bond0: netdev ready Feb 14 01:46:47.726731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 01:46:47.730980 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 01:46:47.735199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 14 01:46:47.735652 systemd-networkd[2600]: Enumeration completed Feb 14 01:46:47.739370 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 14 01:46:47.743752 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 14 01:46:47.744168 systemd-networkd[2600]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5a:06:d8.network. Feb 14 01:46:47.748047 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 14 01:46:47.752343 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 14 01:46:47.756631 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 14 01:46:47.756656 systemd[1]: Reached target paths.target - Path Units. Feb 14 01:46:47.760908 systemd[1]: Reached target timers.target - Timer Units. Feb 14 01:46:47.765834 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 14 01:46:47.771553 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 14 01:46:47.781503 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 14 01:46:47.786296 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 01:46:47.790766 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Feb 14 01:46:47.795168 systemd[1]: Reached target network.target - Network. Feb 14 01:46:47.799465 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 01:46:47.803625 systemd[1]: Reached target basic.target - Basic System. Feb 14 01:46:47.807763 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 14 01:46:47.807782 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 14 01:46:47.822276 systemd[1]: Starting containerd.service - containerd container runtime... Feb 14 01:46:47.827658 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 14 01:46:47.833055 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 14 01:46:47.838463 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 14 01:46:47.843910 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 14 01:46:47.848296 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 14 01:46:47.849328 coreos-metadata[2658]: Feb 14 01:46:47.849 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 01:46:47.849439 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 14 01:46:47.849568 jq[2662]: false Feb 14 01:46:47.852680 coreos-metadata[2658]: Feb 14 01:46:47.852 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 14 01:46:47.854658 dbus-daemon[2659]: [system] SELinux support is enabled Feb 14 01:46:47.854762 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 14 01:46:47.860110 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Feb 14 01:46:47.863102 extend-filesystems[2663]: Found loop4 Feb 14 01:46:47.863102 extend-filesystems[2663]: Found loop5 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found loop6 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found loop7 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1p1 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1p2 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1p3 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found usr Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1p4 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1p6 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1p7 Feb 14 01:46:47.877779 extend-filesystems[2663]: Found nvme0n1p9 Feb 14 01:46:47.877779 extend-filesystems[2663]: Checking size of /dev/nvme0n1p9 Feb 14 01:46:47.877779 extend-filesystems[2663]: Resized partition /dev/nvme0n1p9 Feb 14 01:46:48.003213 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Feb 14 01:46:48.003243 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2465) Feb 14 01:46:47.865770 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 14 01:46:47.992610 dbus-daemon[2659]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 14 01:46:48.003434 extend-filesystems[2682]: resize2fs 1.47.1 (20-May-2024) Feb 14 01:46:47.877670 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 14 01:46:47.884153 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 14 01:46:47.923748 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 14 01:46:47.924368 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Feb 14 01:46:48.013265 update_engine[2691]: I20250214 01:46:47.970736 2691 main.cc:92] Flatcar Update Engine starting Feb 14 01:46:48.013265 update_engine[2691]: I20250214 01:46:47.973132 2691 update_check_scheduler.cc:74] Next update check in 10m50s Feb 14 01:46:47.925009 systemd[1]: Starting update-engine.service - Update Engine... Feb 14 01:46:48.013575 jq[2692]: true Feb 14 01:46:47.931891 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 14 01:46:47.939996 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 14 01:46:48.013901 tar[2697]: linux-arm64/helm Feb 14 01:46:47.953104 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 14 01:46:47.953305 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 14 01:46:48.014276 jq[2698]: true Feb 14 01:46:47.953563 systemd[1]: motdgen.service: Deactivated successfully. Feb 14 01:46:47.953788 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 14 01:46:47.955309 systemd-logind[2681]: Watching system buttons on /dev/input/event0 (Power Button) Feb 14 01:46:47.957832 systemd-logind[2681]: New seat seat0. Feb 14 01:46:47.963190 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 14 01:46:47.963353 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 14 01:46:47.978853 systemd[1]: Started systemd-logind.service - User Login Management. Feb 14 01:46:47.982087 (ntainerd)[2699]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 14 01:46:48.002044 systemd[1]: Started update-engine.service - Update Engine. 
Feb 14 01:46:48.009195 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 14 01:46:48.009533 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 14 01:46:48.017388 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 14 01:46:48.017495 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 14 01:46:48.027844 bash[2719]: Updated "/home/core/.ssh/authorized_keys" Feb 14 01:46:48.039427 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 14 01:46:48.047290 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 14 01:46:48.057036 systemd[1]: Starting sshkeys.service... Feb 14 01:46:48.070017 locksmithd[2720]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 14 01:46:48.070779 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 14 01:46:48.076813 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 14 01:46:48.096591 coreos-metadata[2737]: Feb 14 01:46:48.096 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 01:46:48.097777 coreos-metadata[2737]: Feb 14 01:46:48.097 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 14 01:46:48.128173 containerd[2699]: time="2025-02-14T01:46:48.128067680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 14 01:46:48.150030 containerd[2699]: time="2025-02-14T01:46:48.149996000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151351 containerd[2699]: time="2025-02-14T01:46:48.151323120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151384 containerd[2699]: time="2025-02-14T01:46:48.151349320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 14 01:46:48.151384 containerd[2699]: time="2025-02-14T01:46:48.151363200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 14 01:46:48.151515 containerd[2699]: time="2025-02-14T01:46:48.151499800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 14 01:46:48.151539 containerd[2699]: time="2025-02-14T01:46:48.151517560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151580 containerd[2699]: time="2025-02-14T01:46:48.151565680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151599 containerd[2699]: time="2025-02-14T01:46:48.151578240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151737 containerd[2699]: time="2025-02-14T01:46:48.151719960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151761 containerd[2699]: time="2025-02-14T01:46:48.151736600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151761 containerd[2699]: time="2025-02-14T01:46:48.151750880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151796 containerd[2699]: time="2025-02-14T01:46:48.151760360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 14 01:46:48.151840 containerd[2699]: time="2025-02-14T01:46:48.151826600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:46:48.152030 containerd[2699]: time="2025-02-14T01:46:48.152013240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 14 01:46:48.152131 containerd[2699]: time="2025-02-14T01:46:48.152114400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 01:46:48.152152 containerd[2699]: time="2025-02-14T01:46:48.152128800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 14 01:46:48.152250 containerd[2699]: time="2025-02-14T01:46:48.152237880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 14 01:46:48.152289 containerd[2699]: time="2025-02-14T01:46:48.152280040Z" level=info msg="metadata content store policy set" policy=shared Feb 14 01:46:48.159049 containerd[2699]: time="2025-02-14T01:46:48.159028200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 14 01:46:48.159077 containerd[2699]: time="2025-02-14T01:46:48.159067560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 14 01:46:48.159102 containerd[2699]: time="2025-02-14T01:46:48.159082960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 14 01:46:48.159102 containerd[2699]: time="2025-02-14T01:46:48.159097840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 14 01:46:48.159135 containerd[2699]: time="2025-02-14T01:46:48.159112400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 14 01:46:48.159262 containerd[2699]: time="2025-02-14T01:46:48.159249520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 14 01:46:48.160106 containerd[2699]: time="2025-02-14T01:46:48.160079680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 14 01:46:48.160288 containerd[2699]: time="2025-02-14T01:46:48.160273640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 14 01:46:48.160307 containerd[2699]: time="2025-02-14T01:46:48.160293480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 14 01:46:48.160330 containerd[2699]: time="2025-02-14T01:46:48.160309000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 14 01:46:48.160330 containerd[2699]: time="2025-02-14T01:46:48.160324400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 14 01:46:48.160363 containerd[2699]: time="2025-02-14T01:46:48.160337760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 14 01:46:48.160363 containerd[2699]: time="2025-02-14T01:46:48.160351200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 14 01:46:48.160395 containerd[2699]: time="2025-02-14T01:46:48.160367200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 14 01:46:48.160395 containerd[2699]: time="2025-02-14T01:46:48.160381640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 14 01:46:48.160430 containerd[2699]: time="2025-02-14T01:46:48.160394560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 14 01:46:48.160430 containerd[2699]: time="2025-02-14T01:46:48.160407120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 14 01:46:48.160430 containerd[2699]: time="2025-02-14T01:46:48.160418360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 14 01:46:48.160478 containerd[2699]: time="2025-02-14T01:46:48.160437720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160478 containerd[2699]: time="2025-02-14T01:46:48.160451880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160478 containerd[2699]: time="2025-02-14T01:46:48.160464000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160478 containerd[2699]: time="2025-02-14T01:46:48.160476480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160548 containerd[2699]: time="2025-02-14T01:46:48.160489320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160548 containerd[2699]: time="2025-02-14T01:46:48.160503080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160548 containerd[2699]: time="2025-02-14T01:46:48.160515280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160548 containerd[2699]: time="2025-02-14T01:46:48.160528280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160548 containerd[2699]: time="2025-02-14T01:46:48.160541560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160635 containerd[2699]: time="2025-02-14T01:46:48.160556640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Feb 14 01:46:48.160635 containerd[2699]: time="2025-02-14T01:46:48.160569200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160635 containerd[2699]: time="2025-02-14T01:46:48.160581160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160635 containerd[2699]: time="2025-02-14T01:46:48.160592920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160635 containerd[2699]: time="2025-02-14T01:46:48.160608680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 14 01:46:48.160635 containerd[2699]: time="2025-02-14T01:46:48.160628680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160733 containerd[2699]: time="2025-02-14T01:46:48.160641880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160733 containerd[2699]: time="2025-02-14T01:46:48.160653680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 14 01:46:48.160776 containerd[2699]: time="2025-02-14T01:46:48.160766440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 14 01:46:48.160796 containerd[2699]: time="2025-02-14T01:46:48.160785240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 14 01:46:48.160815 containerd[2699]: time="2025-02-14T01:46:48.160797240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 14 01:46:48.160836 containerd[2699]: time="2025-02-14T01:46:48.160810480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 14 01:46:48.160836 containerd[2699]: time="2025-02-14T01:46:48.160820160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.160836 containerd[2699]: time="2025-02-14T01:46:48.160831760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 14 01:46:48.160886 containerd[2699]: time="2025-02-14T01:46:48.160843880Z" level=info msg="NRI interface is disabled by configuration." Feb 14 01:46:48.160886 containerd[2699]: time="2025-02-14T01:46:48.160858920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 14 01:46:48.161238 containerd[2699]: time="2025-02-14T01:46:48.161191160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 14 01:46:48.161342 containerd[2699]: time="2025-02-14T01:46:48.161246920Z" level=info msg="Connect containerd service" Feb 14 01:46:48.161342 containerd[2699]: time="2025-02-14T01:46:48.161272040Z" level=info msg="using legacy CRI server" Feb 14 01:46:48.161342 containerd[2699]: time="2025-02-14T01:46:48.161278240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 14 01:46:48.161395 containerd[2699]: 
time="2025-02-14T01:46:48.161349800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 14 01:46:48.161952 containerd[2699]: time="2025-02-14T01:46:48.161931560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 01:46:48.162168 containerd[2699]: time="2025-02-14T01:46:48.162132280Z" level=info msg="Start subscribing containerd event" Feb 14 01:46:48.162206 containerd[2699]: time="2025-02-14T01:46:48.162192080Z" level=info msg="Start recovering state" Feb 14 01:46:48.162266 containerd[2699]: time="2025-02-14T01:46:48.162256600Z" level=info msg="Start event monitor" Feb 14 01:46:48.162285 containerd[2699]: time="2025-02-14T01:46:48.162270200Z" level=info msg="Start snapshots syncer" Feb 14 01:46:48.162306 containerd[2699]: time="2025-02-14T01:46:48.162290200Z" level=info msg="Start cni network conf syncer for default" Feb 14 01:46:48.162306 containerd[2699]: time="2025-02-14T01:46:48.162299400Z" level=info msg="Start streaming server" Feb 14 01:46:48.162392 containerd[2699]: time="2025-02-14T01:46:48.162377640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 14 01:46:48.162428 containerd[2699]: time="2025-02-14T01:46:48.162419520Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 14 01:46:48.162472 containerd[2699]: time="2025-02-14T01:46:48.162463640Z" level=info msg="containerd successfully booted in 0.035181s" Feb 14 01:46:48.162514 systemd[1]: Started containerd.service - containerd container runtime. Feb 14 01:46:48.289610 tar[2697]: linux-arm64/LICENSE Feb 14 01:46:48.289680 tar[2697]: linux-arm64/README.md Feb 14 01:46:48.299622 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 14 01:46:48.415191 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Feb 14 01:46:48.431870 extend-filesystems[2682]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 14 01:46:48.431870 extend-filesystems[2682]: old_desc_blocks = 1, new_desc_blocks = 112 Feb 14 01:46:48.431870 extend-filesystems[2682]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Feb 14 01:46:48.461722 extend-filesystems[2663]: Resized filesystem in /dev/nvme0n1p9 Feb 14 01:46:48.461722 extend-filesystems[2663]: Found nvme1n1 Feb 14 01:46:48.434413 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 14 01:46:48.434714 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 14 01:46:48.661866 sshd_keygen[2686]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 14 01:46:48.680588 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 14 01:46:48.696633 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 14 01:46:48.705629 systemd[1]: issuegen.service: Deactivated successfully. Feb 14 01:46:48.705810 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 14 01:46:48.712080 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 14 01:46:48.724859 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 14 01:46:48.731002 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 14 01:46:48.744096 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 14 01:46:48.749313 systemd[1]: Reached target getty.target - Login Prompts. Feb 14 01:46:48.799194 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Feb 14 01:46:48.816188 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Feb 14 01:46:48.820553 systemd-networkd[2600]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5a:06:d9.network. 
Feb 14 01:46:48.852816 coreos-metadata[2658]: Feb 14 01:46:48.852 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 14 01:46:48.853190 coreos-metadata[2658]: Feb 14 01:46:48.853 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Feb 14 01:46:49.097936 coreos-metadata[2737]: Feb 14 01:46:49.097 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 14 01:46:49.098276 coreos-metadata[2737]: Feb 14 01:46:49.098 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Feb 14 01:46:49.403194 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
Feb 14 01:46:49.419962 systemd-networkd[2600]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Feb 14 01:46:49.420184 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link
Feb 14 01:46:49.421234 systemd-networkd[2600]: enP1p1s0f0np0: Link UP
Feb 14 01:46:49.421542 systemd-networkd[2600]: enP1p1s0f0np0: Gained carrier
Feb 14 01:46:49.440190 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 14 01:46:49.450656 systemd-networkd[2600]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5a:06:d8.network.
Feb 14 01:46:49.450960 systemd-networkd[2600]: enP1p1s0f1np1: Link UP
Feb 14 01:46:49.451222 systemd-networkd[2600]: enP1p1s0f1np1: Gained carrier
Feb 14 01:46:49.465465 systemd-networkd[2600]: bond0: Link UP
Feb 14 01:46:49.465763 systemd-networkd[2600]: bond0: Gained carrier
Feb 14 01:46:49.465932 systemd-timesyncd[2603]: Network configuration changed, trying to establish connection.
Feb 14 01:46:49.466472 systemd-timesyncd[2603]: Network configuration changed, trying to establish connection.
Feb 14 01:46:49.466807 systemd-timesyncd[2603]: Network configuration changed, trying to establish connection.
Feb 14 01:46:49.466959 systemd-timesyncd[2603]: Network configuration changed, trying to establish connection.
Feb 14 01:46:49.547348 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex
Feb 14 01:46:49.547379 kernel: bond0: active interface up!
Feb 14 01:46:49.672194 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex
Feb 14 01:46:50.696552 systemd-timesyncd[2603]: Network configuration changed, trying to establish connection.
Feb 14 01:46:50.853300 coreos-metadata[2658]: Feb 14 01:46:50.853 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
Feb 14 01:46:50.888236 systemd-networkd[2600]: bond0: Gained IPv6LL
Feb 14 01:46:50.888464 systemd-timesyncd[2603]: Network configuration changed, trying to establish connection.
Feb 14 01:46:50.890365 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 14 01:46:50.896103 systemd[1]: Reached target network-online.target - Network is Online.
Feb 14 01:46:50.911410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 01:46:50.917854 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 14 01:46:50.938925 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 14 01:46:51.098432 coreos-metadata[2737]: Feb 14 01:46:51.098 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
Feb 14 01:46:51.455755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 01:46:51.461643 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 14 01:46:51.847208 kubelet[2801]: E0214 01:46:51.847126 2801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 14 01:46:51.849407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 14 01:46:51.849543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 14 01:46:52.473923 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2
Feb 14 01:46:52.474210 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity
Feb 14 01:46:53.588647 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 14 01:46:53.606418 systemd[1]: Started sshd@0-147.75.62.106:22-139.178.68.195:44356.service - OpenSSH per-connection server daemon (139.178.68.195:44356).
Feb 14 01:46:53.786230 login[2781]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Feb 14 01:46:53.787548 login[2782]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:53.795308 systemd-logind[2681]: New session 2 of user core.
Feb 14 01:46:53.796661 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 14 01:46:53.806511 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 14 01:46:53.814841 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 14 01:46:53.818232 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 14 01:46:53.824121 (systemd)[2837]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 14 01:46:53.925375 coreos-metadata[2658]: Feb 14 01:46:53.925 INFO Fetch successful
Feb 14 01:46:53.927930 systemd[2837]: Queued start job for default target default.target.
Feb 14 01:46:53.945247 systemd[2837]: Created slice app.slice - User Application Slice.
Feb 14 01:46:53.945271 systemd[2837]: Reached target paths.target - Paths.
Feb 14 01:46:53.945283 systemd[2837]: Reached target timers.target - Timers.
Feb 14 01:46:53.946478 systemd[2837]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 14 01:46:53.955129 systemd[2837]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 14 01:46:53.955191 systemd[2837]: Reached target sockets.target - Sockets.
Feb 14 01:46:53.955204 systemd[2837]: Reached target basic.target - Basic System.
Feb 14 01:46:53.955244 systemd[2837]: Reached target default.target - Main User Target.
Feb 14 01:46:53.955266 systemd[2837]: Startup finished in 126ms.
Feb 14 01:46:53.955685 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 14 01:46:53.957192 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 14 01:46:53.980221 coreos-metadata[2737]: Feb 14 01:46:53.980 INFO Fetch successful
Feb 14 01:46:53.982734 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 14 01:46:53.984680 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Feb 14 01:46:54.023130 sshd[2827]: Accepted publickey for core from 139.178.68.195 port 44356 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:46:54.024550 sshd[2827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:54.027531 systemd-logind[2681]: New session 3 of user core.
Feb 14 01:46:54.036647 unknown[2737]: wrote ssh authorized keys file for user: core
Feb 14 01:46:54.038323 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 14 01:46:54.053688 update-ssh-keys[2864]: Updated "/home/core/.ssh/authorized_keys"
Feb 14 01:46:54.054833 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 14 01:46:54.056325 systemd[1]: Finished sshkeys.service.
Feb 14 01:46:54.352725 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Feb 14 01:46:54.353170 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 14 01:46:54.354267 systemd[1]: Startup finished in 3.219s (kernel) + 21.020s (initrd) + 9.976s (userspace) = 34.217s.
Feb 14 01:46:54.395628 systemd[1]: Started sshd@1-147.75.62.106:22-139.178.68.195:44366.service - OpenSSH per-connection server daemon (139.178.68.195:44366).
Feb 14 01:46:54.786931 login[2781]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:54.791093 systemd-logind[2681]: New session 1 of user core.
Feb 14 01:46:54.800304 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 14 01:46:54.812291 sshd[2872]: Accepted publickey for core from 139.178.68.195 port 44366 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:46:54.813450 sshd[2872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:54.815881 systemd-logind[2681]: New session 4 of user core.
Feb 14 01:46:54.828356 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 14 01:46:55.117722 sshd[2872]: pam_unix(sshd:session): session closed for user core
Feb 14 01:46:55.121550 systemd[1]: sshd@1-147.75.62.106:22-139.178.68.195:44366.service: Deactivated successfully.
Feb 14 01:46:55.123937 systemd[1]: session-4.scope: Deactivated successfully.
Feb 14 01:46:55.124447 systemd-logind[2681]: Session 4 logged out. Waiting for processes to exit.
Feb 14 01:46:55.124983 systemd-logind[2681]: Removed session 4.
Feb 14 01:46:55.186539 systemd[1]: Started sshd@2-147.75.62.106:22-139.178.68.195:44370.service - OpenSSH per-connection server daemon (139.178.68.195:44370).
Feb 14 01:46:55.584698 sshd[2890]: Accepted publickey for core from 139.178.68.195 port 44370 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:46:55.585749 sshd[2890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:55.588510 systemd-logind[2681]: New session 5 of user core.
Feb 14 01:46:55.600340 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 14 01:46:55.873536 sshd[2890]: pam_unix(sshd:session): session closed for user core
Feb 14 01:46:55.876986 systemd[1]: sshd@2-147.75.62.106:22-139.178.68.195:44370.service: Deactivated successfully.
Feb 14 01:46:55.878815 systemd[1]: session-5.scope: Deactivated successfully.
Feb 14 01:46:55.879417 systemd-logind[2681]: Session 5 logged out. Waiting for processes to exit.
Feb 14 01:46:55.879946 systemd-logind[2681]: Removed session 5.
Feb 14 01:46:55.952509 systemd[1]: Started sshd@3-147.75.62.106:22-139.178.68.195:44374.service - OpenSSH per-connection server daemon (139.178.68.195:44374).
Feb 14 01:46:56.361699 sshd[2897]: Accepted publickey for core from 139.178.68.195 port 44374 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:46:56.362750 sshd[2897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:56.365623 systemd-logind[2681]: New session 6 of user core.
Feb 14 01:46:56.379302 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 14 01:46:56.661050 sshd[2897]: pam_unix(sshd:session): session closed for user core
Feb 14 01:46:56.664615 systemd[1]: sshd@3-147.75.62.106:22-139.178.68.195:44374.service: Deactivated successfully.
Feb 14 01:46:56.666704 systemd[1]: session-6.scope: Deactivated successfully.
Feb 14 01:46:56.667207 systemd-logind[2681]: Session 6 logged out. Waiting for processes to exit.
Feb 14 01:46:56.667751 systemd-logind[2681]: Removed session 6.
Feb 14 01:46:56.730452 systemd[1]: Started sshd@4-147.75.62.106:22-139.178.68.195:44382.service - OpenSSH per-connection server daemon (139.178.68.195:44382).
Feb 14 01:46:57.135471 sshd[2904]: Accepted publickey for core from 139.178.68.195 port 44382 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:46:57.136582 sshd[2904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:57.139229 systemd-logind[2681]: New session 7 of user core.
Feb 14 01:46:57.151338 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 14 01:46:57.388958 sudo[2907]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 14 01:46:57.389222 sudo[2907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 14 01:46:57.402923 sudo[2907]: pam_unix(sudo:session): session closed for user root
Feb 14 01:46:57.466829 sshd[2904]: pam_unix(sshd:session): session closed for user core
Feb 14 01:46:57.470814 systemd[1]: sshd@4-147.75.62.106:22-139.178.68.195:44382.service: Deactivated successfully.
Feb 14 01:46:57.472597 systemd[1]: session-7.scope: Deactivated successfully.
Feb 14 01:46:57.473700 systemd-logind[2681]: Session 7 logged out. Waiting for processes to exit.
Feb 14 01:46:57.474285 systemd-logind[2681]: Removed session 7.
Feb 14 01:46:57.536566 systemd[1]: Started sshd@5-147.75.62.106:22-139.178.68.195:38362.service - OpenSSH per-connection server daemon (139.178.68.195:38362).
Feb 14 01:46:57.933319 sshd[2914]: Accepted publickey for core from 139.178.68.195 port 38362 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:46:57.934531 sshd[2914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:57.937361 systemd-logind[2681]: New session 8 of user core.
Feb 14 01:46:57.950298 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 14 01:46:58.161131 sudo[2918]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 14 01:46:58.161399 sudo[2918]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 14 01:46:58.163889 sudo[2918]: pam_unix(sudo:session): session closed for user root
Feb 14 01:46:58.168340 sudo[2917]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 14 01:46:58.168600 sudo[2917]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 14 01:46:58.187362 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 14 01:46:58.188462 auditctl[2921]: No rules
Feb 14 01:46:58.189291 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 14 01:46:58.191227 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 14 01:46:58.192929 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 14 01:46:58.215560 augenrules[2939]: No rules
Feb 14 01:46:58.216813 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 14 01:46:58.217634 sudo[2917]: pam_unix(sudo:session): session closed for user root
Feb 14 01:46:58.279418 sshd[2914]: pam_unix(sshd:session): session closed for user core
Feb 14 01:46:58.282924 systemd[1]: sshd@5-147.75.62.106:22-139.178.68.195:38362.service: Deactivated successfully.
Feb 14 01:46:58.284540 systemd[1]: session-8.scope: Deactivated successfully.
Feb 14 01:46:58.285017 systemd-logind[2681]: Session 8 logged out. Waiting for processes to exit.
Feb 14 01:46:58.285577 systemd-logind[2681]: Removed session 8.
Feb 14 01:46:58.356353 systemd[1]: Started sshd@6-147.75.62.106:22-139.178.68.195:38366.service - OpenSSH per-connection server daemon (139.178.68.195:38366).
Feb 14 01:46:58.777797 sshd[2947]: Accepted publickey for core from 139.178.68.195 port 38366 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:46:58.778850 sshd[2947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:46:58.781627 systemd-logind[2681]: New session 9 of user core.
Feb 14 01:46:58.793290 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 14 01:46:59.019401 sudo[2950]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 14 01:46:59.019667 sudo[2950]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 14 01:46:59.301366 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 14 01:46:59.301506 (dockerd)[2981]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 14 01:46:59.511722 dockerd[2981]: time="2025-02-14T01:46:59.511677760Z" level=info msg="Starting up"
Feb 14 01:46:59.570051 dockerd[2981]: time="2025-02-14T01:46:59.569989720Z" level=info msg="Loading containers: start."
Feb 14 01:46:59.651192 kernel: Initializing XFRM netlink socket
Feb 14 01:46:59.668999 systemd-timesyncd[2603]: Network configuration changed, trying to establish connection.
Feb 14 01:46:59.725118 systemd-networkd[2600]: docker0: Link UP
Feb 14 01:46:59.740283 dockerd[2981]: time="2025-02-14T01:46:59.740247920Z" level=info msg="Loading containers: done."
Feb 14 01:46:59.749135 dockerd[2981]: time="2025-02-14T01:46:59.749106880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 14 01:46:59.749210 dockerd[2981]: time="2025-02-14T01:46:59.749187160Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 14 01:46:59.749300 dockerd[2981]: time="2025-02-14T01:46:59.749285880Z" level=info msg="Daemon has completed initialization"
Feb 14 01:46:59.769252 dockerd[2981]: time="2025-02-14T01:46:59.769128400Z" level=info msg="API listen on /run/docker.sock"
Feb 14 01:46:59.769290 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 14 01:46:59.607658 systemd-resolved[2602]: Clock change detected. Flushing caches.
Feb 14 01:46:59.615671 systemd-journald[2225]: Time jumped backwards, rotating.
Feb 14 01:46:59.607864 systemd-timesyncd[2603]: Contacted time server [2604:2dc0:202:300::13ac]:123 (2.flatcar.pool.ntp.org).
Feb 14 01:46:59.607914 systemd-timesyncd[2603]: Initial clock synchronization to Fri 2025-02-14 01:46:59.607594 UTC.
Feb 14 01:46:59.791338 containerd[2699]: time="2025-02-14T01:46:59.791302853Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 14 01:47:00.008516 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3482800843-merged.mount: Deactivated successfully.
Feb 14 01:47:00.330730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784329383.mount: Deactivated successfully.
Feb 14 01:47:01.545419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 14 01:47:01.554888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 01:47:01.648943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 01:47:01.652567 (kubelet)[3241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 14 01:47:01.687637 kubelet[3241]: E0214 01:47:01.687604 3241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 14 01:47:01.690528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 14 01:47:01.690665 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 14 01:47:02.852531 containerd[2699]: time="2025-02-14T01:47:02.852489053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:02.852855 containerd[2699]: time="2025-02-14T01:47:02.852511053Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375"
Feb 14 01:47:02.853587 containerd[2699]: time="2025-02-14T01:47:02.853567173Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:02.856372 containerd[2699]: time="2025-02-14T01:47:02.856346973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:02.857488 containerd[2699]: time="2025-02-14T01:47:02.857457973Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 3.06611432s"
Feb 14 01:47:02.857508 containerd[2699]: time="2025-02-14T01:47:02.857497693Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\""
Feb 14 01:47:02.858119 containerd[2699]: time="2025-02-14T01:47:02.858097413Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 14 01:47:04.493980 containerd[2699]: time="2025-02-14T01:47:04.493945533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:04.494170 containerd[2699]: time="2025-02-14T01:47:04.494023453Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773"
Feb 14 01:47:04.495040 containerd[2699]: time="2025-02-14T01:47:04.495014333Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:04.497849 containerd[2699]: time="2025-02-14T01:47:04.497830493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:04.498911 containerd[2699]: time="2025-02-14T01:47:04.498884813Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.64075604s"
Feb 14 01:47:04.498946 containerd[2699]: time="2025-02-14T01:47:04.498916293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\""
Feb 14 01:47:04.499281 containerd[2699]: time="2025-02-14T01:47:04.499258653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 14 01:47:05.714254 containerd[2699]: time="2025-02-14T01:47:05.714163573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:05.714254 containerd[2699]: time="2025-02-14T01:47:05.714199133Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540"
Feb 14 01:47:05.715378 containerd[2699]: time="2025-02-14T01:47:05.715327493Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:05.718186 containerd[2699]: time="2025-02-14T01:47:05.718156133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:05.719374 containerd[2699]: time="2025-02-14T01:47:05.719266973Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.219975s"
Feb 14 01:47:05.719374 containerd[2699]: time="2025-02-14T01:47:05.719306453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\""
Feb 14 01:47:05.719628 containerd[2699]: time="2025-02-14T01:47:05.719607693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 14 01:47:06.325509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191640085.mount: Deactivated successfully.
Feb 14 01:47:07.044016 containerd[2699]: time="2025-02-14T01:47:07.043972013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:07.044349 containerd[2699]: time="2025-02-14T01:47:07.044025333Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256"
Feb 14 01:47:07.044763 containerd[2699]: time="2025-02-14T01:47:07.044732573Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:07.046507 containerd[2699]: time="2025-02-14T01:47:07.046484973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:07.047208 containerd[2699]: time="2025-02-14T01:47:07.047180293Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.32754544s"
Feb 14 01:47:07.047233 containerd[2699]: time="2025-02-14T01:47:07.047216013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 14 01:47:07.048097 containerd[2699]: time="2025-02-14T01:47:07.048056253Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 14 01:47:07.421539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745416103.mount: Deactivated successfully.
Feb 14 01:47:07.870867 containerd[2699]: time="2025-02-14T01:47:07.870800493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:07.871581 containerd[2699]: time="2025-02-14T01:47:07.870829293Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 14 01:47:07.872871 containerd[2699]: time="2025-02-14T01:47:07.872787693Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:07.876314 containerd[2699]: time="2025-02-14T01:47:07.876287613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:07.877525 containerd[2699]: time="2025-02-14T01:47:07.877495253Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 829.39272ms"
Feb 14 01:47:07.877566 containerd[2699]: time="2025-02-14T01:47:07.877529573Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 14 01:47:07.877881 containerd[2699]: time="2025-02-14T01:47:07.877857333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 14 01:47:08.145612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394680345.mount: Deactivated successfully.
Feb 14 01:47:08.146180 containerd[2699]: time="2025-02-14T01:47:08.146152413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:08.146390 containerd[2699]: time="2025-02-14T01:47:08.146252693Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 14 01:47:08.146895 containerd[2699]: time="2025-02-14T01:47:08.146876013Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:08.148900 containerd[2699]: time="2025-02-14T01:47:08.148875533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:08.149743 containerd[2699]: time="2025-02-14T01:47:08.149722213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 271.834ms"
Feb 14 01:47:08.149770 containerd[2699]: time="2025-02-14T01:47:08.149754533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 14 01:47:08.150041 containerd[2699]: time="2025-02-14T01:47:08.150023853Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 14 01:47:08.415234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792103668.mount: Deactivated successfully.
Feb 14 01:47:09.938221 containerd[2699]: time="2025-02-14T01:47:09.938173133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:09.938549 containerd[2699]: time="2025-02-14T01:47:09.938198333Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Feb 14 01:47:09.939334 containerd[2699]: time="2025-02-14T01:47:09.939311893Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:09.942552 containerd[2699]: time="2025-02-14T01:47:09.942524493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:09.943865 containerd[2699]: time="2025-02-14T01:47:09.943839853Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.79378804s"
Feb 14 01:47:09.943894 containerd[2699]: time="2025-02-14T01:47:09.943873253Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Feb 14 01:47:11.804501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 14 01:47:11.814303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 01:47:11.908969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 01:47:11.912663 (kubelet)[3464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 01:47:11.957998 kubelet[3464]: E0214 01:47:11.957961 3464 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 01:47:11.960611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 01:47:11.960759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 01:47:14.180015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:47:14.194104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:47:14.212187 systemd[1]: Reloading requested from client PID 3493 ('systemctl') (unit session-9.scope)... Feb 14 01:47:14.212198 systemd[1]: Reloading... Feb 14 01:47:14.276753 zram_generator::config[3536]: No configuration found. Feb 14 01:47:14.367235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 01:47:14.438265 systemd[1]: Reloading finished in 225 ms. Feb 14 01:47:14.480368 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:47:14.482731 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 01:47:14.482949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 01:47:14.484455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 01:47:14.583120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 14 01:47:14.586753 (kubelet)[3601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 01:47:14.616267 kubelet[3601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 01:47:14.616267 kubelet[3601]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 01:47:14.616267 kubelet[3601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 01:47:14.616513 kubelet[3601]: I0214 01:47:14.616445 3601 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 01:47:15.986732 kubelet[3601]: I0214 01:47:15.986699 3601 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 14 01:47:15.986732 kubelet[3601]: I0214 01:47:15.986725 3601 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 01:47:15.987116 kubelet[3601]: I0214 01:47:15.986932 3601 server.go:929] "Client rotation is on, will bootstrap in background" Feb 14 01:47:16.009037 kubelet[3601]: E0214 01:47:16.009010 3601 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.62.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.62.106:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:47:16.009628 kubelet[3601]: 
I0214 01:47:16.009618 3601 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 01:47:16.014801 kubelet[3601]: E0214 01:47:16.014779 3601 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 14 01:47:16.014828 kubelet[3601]: I0214 01:47:16.014800 3601 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 14 01:47:16.034765 kubelet[3601]: I0214 01:47:16.034733 3601 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 14 01:47:16.037751 kubelet[3601]: I0214 01:47:16.037729 3601 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 14 01:47:16.037895 kubelet[3601]: I0214 01:47:16.037867 3601 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 01:47:16.038044 kubelet[3601]: I0214 01:47:16.037896 3601 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.1-a-385c1ddb28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 01:47:16.038190 kubelet[3601]: I0214 01:47:16.038180 3601 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 01:47:16.038190 kubelet[3601]: I0214 01:47:16.038190 3601 container_manager_linux.go:300] "Creating device plugin manager" Feb 14 01:47:16.038379 kubelet[3601]: I0214 01:47:16.038369 3601 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:47:16.040314 kubelet[3601]: I0214 01:47:16.040299 3601 
kubelet.go:408] "Attempting to sync node with API server" Feb 14 01:47:16.040337 kubelet[3601]: I0214 01:47:16.040320 3601 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 01:47:16.040359 kubelet[3601]: I0214 01:47:16.040344 3601 kubelet.go:314] "Adding apiserver pod source" Feb 14 01:47:16.040359 kubelet[3601]: I0214 01:47:16.040354 3601 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 01:47:16.041675 kubelet[3601]: W0214 01:47:16.041634 3601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.62.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-385c1ddb28&limit=500&resourceVersion=0": dial tcp 147.75.62.106:6443: connect: connection refused Feb 14 01:47:16.041708 kubelet[3601]: E0214 01:47:16.041693 3601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.62.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-385c1ddb28&limit=500&resourceVersion=0\": dial tcp 147.75.62.106:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:47:16.042315 kubelet[3601]: I0214 01:47:16.042296 3601 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 01:47:16.044151 kubelet[3601]: I0214 01:47:16.044131 3601 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 01:47:16.044836 kubelet[3601]: W0214 01:47:16.044798 3601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.62.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.62.106:6443: connect: connection refused Feb 14 01:47:16.044860 kubelet[3601]: E0214 01:47:16.044845 3601 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.62.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.62.106:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:47:16.044946 kubelet[3601]: W0214 01:47:16.044934 3601 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 14 01:47:16.045537 kubelet[3601]: I0214 01:47:16.045524 3601 server.go:1269] "Started kubelet" Feb 14 01:47:16.045616 kubelet[3601]: I0214 01:47:16.045584 3601 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 01:47:16.045773 kubelet[3601]: I0214 01:47:16.045731 3601 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 01:47:16.045985 kubelet[3601]: I0214 01:47:16.045972 3601 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 01:47:16.046694 kubelet[3601]: I0214 01:47:16.046679 3601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 01:47:16.046729 kubelet[3601]: I0214 01:47:16.046711 3601 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 14 01:47:16.050970 kubelet[3601]: I0214 01:47:16.050907 3601 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 14 01:47:16.051000 kubelet[3601]: I0214 01:47:16.050967 3601 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 14 01:47:16.051083 kubelet[3601]: I0214 01:47:16.051067 3601 reconciler.go:26] "Reconciler: start to sync state" Feb 14 01:47:16.051224 kubelet[3601]: E0214 01:47:16.051162 3601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://147.75.62.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-385c1ddb28?timeout=10s\": dial tcp 147.75.62.106:6443: connect: connection refused" interval="200ms" Feb 14 01:47:16.051359 kubelet[3601]: W0214 01:47:16.051313 3601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.62.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.62.106:6443: connect: connection refused Feb 14 01:47:16.051407 kubelet[3601]: E0214 01:47:16.051384 3601 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-385c1ddb28\" not found" Feb 14 01:47:16.051429 kubelet[3601]: E0214 01:47:16.051395 3601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.62.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.62.106:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:47:16.051565 kubelet[3601]: I0214 01:47:16.051550 3601 factory.go:221] Registration of the systemd container factory successfully Feb 14 01:47:16.053786 kubelet[3601]: I0214 01:47:16.053761 3601 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 01:47:16.053920 kubelet[3601]: I0214 01:47:16.053906 3601 server.go:460] "Adding debug handlers to kubelet server" Feb 14 01:47:16.054524 kubelet[3601]: I0214 01:47:16.054508 3601 factory.go:221] Registration of the containerd container factory successfully Feb 14 01:47:16.054801 kubelet[3601]: E0214 01:47:16.053773 3601 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.62.106:6443/api/v1/namespaces/default/events\": dial tcp 147.75.62.106:6443: connect: 
connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-385c1ddb28.1823efe23c1fba4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-385c1ddb28,UID:ci-4081.3.1-a-385c1ddb28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-385c1ddb28,},FirstTimestamp:2025-02-14 01:47:16.045503053 +0000 UTC m=+1.455885721,LastTimestamp:2025-02-14 01:47:16.045503053 +0000 UTC m=+1.455885721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-385c1ddb28,}" Feb 14 01:47:16.055093 kubelet[3601]: E0214 01:47:16.055077 3601 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 01:47:16.065422 kubelet[3601]: I0214 01:47:16.065393 3601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 01:47:16.066389 kubelet[3601]: I0214 01:47:16.066378 3601 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 14 01:47:16.066411 kubelet[3601]: I0214 01:47:16.066394 3601 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 01:47:16.066411 kubelet[3601]: I0214 01:47:16.066411 3601 kubelet.go:2321] "Starting kubelet main sync loop" Feb 14 01:47:16.066461 kubelet[3601]: E0214 01:47:16.066446 3601 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 01:47:16.066853 kubelet[3601]: W0214 01:47:16.066808 3601 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.62.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.62.106:6443: connect: connection refused Feb 14 01:47:16.066883 kubelet[3601]: E0214 01:47:16.066868 3601 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.62.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.62.106:6443: connect: connection refused" logger="UnhandledError" Feb 14 01:47:16.070229 kubelet[3601]: I0214 01:47:16.070213 3601 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 14 01:47:16.070229 kubelet[3601]: I0214 01:47:16.070225 3601 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 14 01:47:16.070269 kubelet[3601]: I0214 01:47:16.070240 3601 state_mem.go:36] "Initialized new in-memory state store" Feb 14 01:47:16.071131 kubelet[3601]: I0214 01:47:16.071117 3601 policy_none.go:49] "None policy: Start" Feb 14 01:47:16.071492 kubelet[3601]: I0214 01:47:16.071475 3601 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 01:47:16.071517 kubelet[3601]: I0214 01:47:16.071501 3601 state_mem.go:35] "Initializing new in-memory state store" Feb 14 01:47:16.076505 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Feb 14 01:47:16.099788 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 14 01:47:16.102183 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 14 01:47:16.114880 kubelet[3601]: I0214 01:47:16.114852 3601 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 01:47:16.115061 kubelet[3601]: I0214 01:47:16.115049 3601 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 01:47:16.115091 kubelet[3601]: I0214 01:47:16.115061 3601 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 01:47:16.115258 kubelet[3601]: I0214 01:47:16.115240 3601 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 01:47:16.115863 kubelet[3601]: E0214 01:47:16.115847 3601 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-385c1ddb28\" not found" Feb 14 01:47:16.173088 systemd[1]: Created slice kubepods-burstable-pod73782956d00c4d7adfb741fcee708c2f.slice - libcontainer container kubepods-burstable-pod73782956d00c4d7adfb741fcee708c2f.slice. Feb 14 01:47:16.184277 systemd[1]: Created slice kubepods-burstable-pod07220b1d1f253e7a5f2669b4959a7fd7.slice - libcontainer container kubepods-burstable-pod07220b1d1f253e7a5f2669b4959a7fd7.slice. Feb 14 01:47:16.201723 systemd[1]: Created slice kubepods-burstable-pod68dd272c3f8f4acc26abaf13f608c6f4.slice - libcontainer container kubepods-burstable-pod68dd272c3f8f4acc26abaf13f608c6f4.slice. 
Feb 14 01:47:16.217224 kubelet[3601]: I0214 01:47:16.217199 3601 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.217591 kubelet[3601]: E0214 01:47:16.217564 3601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.62.106:6443/api/v1/nodes\": dial tcp 147.75.62.106:6443: connect: connection refused" node="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.252005 kubelet[3601]: E0214 01:47:16.251941 3601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.62.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-385c1ddb28?timeout=10s\": dial tcp 147.75.62.106:6443: connect: connection refused" interval="400ms" Feb 14 01:47:16.352191 kubelet[3601]: I0214 01:47:16.352164 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73782956d00c4d7adfb741fcee708c2f-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" (UID: \"73782956d00c4d7adfb741fcee708c2f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352233 kubelet[3601]: I0214 01:47:16.352194 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73782956d00c4d7adfb741fcee708c2f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" (UID: \"73782956d00c4d7adfb741fcee708c2f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352233 kubelet[3601]: I0214 01:47:16.352218 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352317 kubelet[3601]: I0214 01:47:16.352290 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73782956d00c4d7adfb741fcee708c2f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" (UID: \"73782956d00c4d7adfb741fcee708c2f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352357 kubelet[3601]: I0214 01:47:16.352337 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352380 kubelet[3601]: I0214 01:47:16.352369 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352413 kubelet[3601]: I0214 01:47:16.352398 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352446 kubelet[3601]: I0214 01:47:16.352428 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.352470 kubelet[3601]: I0214 01:47:16.352459 3601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68dd272c3f8f4acc26abaf13f608c6f4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-385c1ddb28\" (UID: \"68dd272c3f8f4acc26abaf13f608c6f4\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.420277 kubelet[3601]: I0214 01:47:16.420249 3601 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.420574 kubelet[3601]: E0214 01:47:16.420546 3601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.62.106:6443/api/v1/nodes\": dial tcp 147.75.62.106:6443: connect: connection refused" node="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.483814 containerd[2699]: time="2025-02-14T01:47:16.483769653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-385c1ddb28,Uid:73782956d00c4d7adfb741fcee708c2f,Namespace:kube-system,Attempt:0,}" Feb 14 01:47:16.504228 containerd[2699]: time="2025-02-14T01:47:16.504175013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-385c1ddb28,Uid:07220b1d1f253e7a5f2669b4959a7fd7,Namespace:kube-system,Attempt:0,}" Feb 14 01:47:16.504344 containerd[2699]: time="2025-02-14T01:47:16.504320173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-385c1ddb28,Uid:68dd272c3f8f4acc26abaf13f608c6f4,Namespace:kube-system,Attempt:0,}" Feb 14 01:47:16.652571 kubelet[3601]: E0214 01:47:16.652528 3601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://147.75.62.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-385c1ddb28?timeout=10s\": dial tcp 147.75.62.106:6443: connect: connection refused" interval="800ms" Feb 14 01:47:16.800208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572022327.mount: Deactivated successfully. Feb 14 01:47:16.800697 containerd[2699]: time="2025-02-14T01:47:16.800656533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:47:16.801368 containerd[2699]: time="2025-02-14T01:47:16.801341653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 14 01:47:16.801542 containerd[2699]: time="2025-02-14T01:47:16.801522093Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:47:16.801964 containerd[2699]: time="2025-02-14T01:47:16.801937813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 01:47:16.802086 containerd[2699]: time="2025-02-14T01:47:16.802065933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 01:47:16.802379 containerd[2699]: time="2025-02-14T01:47:16.802354253Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:47:16.805663 containerd[2699]: time="2025-02-14T01:47:16.805635573Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:47:16.806433 
containerd[2699]: time="2025-02-14T01:47:16.806414053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 322.539ms" Feb 14 01:47:16.808202 containerd[2699]: time="2025-02-14T01:47:16.808176933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 01:47:16.809009 containerd[2699]: time="2025-02-14T01:47:16.808986493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 304.61736ms" Feb 14 01:47:16.809579 containerd[2699]: time="2025-02-14T01:47:16.809553373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 305.30416ms" Feb 14 01:47:16.822750 kubelet[3601]: I0214 01:47:16.822726 3601 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:16.823006 kubelet[3601]: E0214 01:47:16.822979 3601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.62.106:6443/api/v1/nodes\": dial tcp 147.75.62.106:6443: connect: connection refused" node="ci-4081.3.1-a-385c1ddb28" Feb 14 
01:47:16.914127 containerd[2699]: time="2025-02-14T01:47:16.914066493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:16.914155 containerd[2699]: time="2025-02-14T01:47:16.914123093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:16.914155 containerd[2699]: time="2025-02-14T01:47:16.914135493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:16.914254 containerd[2699]: time="2025-02-14T01:47:16.914146373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:16.914276 containerd[2699]: time="2025-02-14T01:47:16.914259093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:16.914296 containerd[2699]: time="2025-02-14T01:47:16.914271053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:16.914661 containerd[2699]: time="2025-02-14T01:47:16.914605613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:16.914685 containerd[2699]: time="2025-02-14T01:47:16.914663573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:16.914685 containerd[2699]: time="2025-02-14T01:47:16.914675453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:16.914863 containerd[2699]: time="2025-02-14T01:47:16.914842173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:16.914863 containerd[2699]: time="2025-02-14T01:47:16.914841333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:16.914903 containerd[2699]: time="2025-02-14T01:47:16.914863933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:16.938903 systemd[1]: Started cri-containerd-5abeab249f5c80e17988f694d667361f8312bc1b7e9ffa157f70c88e5342dbd1.scope - libcontainer container 5abeab249f5c80e17988f694d667361f8312bc1b7e9ffa157f70c88e5342dbd1. Feb 14 01:47:16.940144 systemd[1]: Started cri-containerd-9184ab5a2d51c5f0ec3d4ab388905083cb696c9b38c44274b426ac462dab8c36.scope - libcontainer container 9184ab5a2d51c5f0ec3d4ab388905083cb696c9b38c44274b426ac462dab8c36. Feb 14 01:47:16.941398 systemd[1]: Started cri-containerd-b56a97ebca1b2d6cc97e3377ed62b504c87427d1c8a0ca6e07c23a79aa2b6169.scope - libcontainer container b56a97ebca1b2d6cc97e3377ed62b504c87427d1c8a0ca6e07c23a79aa2b6169. 
Feb 14 01:47:16.962382 containerd[2699]: time="2025-02-14T01:47:16.962349933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-385c1ddb28,Uid:07220b1d1f253e7a5f2669b4959a7fd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5abeab249f5c80e17988f694d667361f8312bc1b7e9ffa157f70c88e5342dbd1\""
Feb 14 01:47:16.963385 containerd[2699]: time="2025-02-14T01:47:16.963356733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-385c1ddb28,Uid:68dd272c3f8f4acc26abaf13f608c6f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9184ab5a2d51c5f0ec3d4ab388905083cb696c9b38c44274b426ac462dab8c36\""
Feb 14 01:47:16.964223 containerd[2699]: time="2025-02-14T01:47:16.964200293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-385c1ddb28,Uid:73782956d00c4d7adfb741fcee708c2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b56a97ebca1b2d6cc97e3377ed62b504c87427d1c8a0ca6e07c23a79aa2b6169\""
Feb 14 01:47:16.964721 containerd[2699]: time="2025-02-14T01:47:16.964700573Z" level=info msg="CreateContainer within sandbox \"5abeab249f5c80e17988f694d667361f8312bc1b7e9ffa157f70c88e5342dbd1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 14 01:47:16.964952 containerd[2699]: time="2025-02-14T01:47:16.964930133Z" level=info msg="CreateContainer within sandbox \"9184ab5a2d51c5f0ec3d4ab388905083cb696c9b38c44274b426ac462dab8c36\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 14 01:47:16.965804 containerd[2699]: time="2025-02-14T01:47:16.965781973Z" level=info msg="CreateContainer within sandbox \"b56a97ebca1b2d6cc97e3377ed62b504c87427d1c8a0ca6e07c23a79aa2b6169\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 14 01:47:16.972229 containerd[2699]: time="2025-02-14T01:47:16.972194933Z" level=info msg="CreateContainer within sandbox \"5abeab249f5c80e17988f694d667361f8312bc1b7e9ffa157f70c88e5342dbd1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31c755a8226673639c4e428b7f02e8bdd2e413cee86093004a68c45ff0dae4ca\""
Feb 14 01:47:16.972598 containerd[2699]: time="2025-02-14T01:47:16.972574053Z" level=info msg="CreateContainer within sandbox \"9184ab5a2d51c5f0ec3d4ab388905083cb696c9b38c44274b426ac462dab8c36\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90aa429a4462420635d950489d50a6cc382e150c9733b277625e83e3a698af6f\""
Feb 14 01:47:16.972650 containerd[2699]: time="2025-02-14T01:47:16.972631933Z" level=info msg="StartContainer for \"31c755a8226673639c4e428b7f02e8bdd2e413cee86093004a68c45ff0dae4ca\""
Feb 14 01:47:16.972848 containerd[2699]: time="2025-02-14T01:47:16.972832653Z" level=info msg="StartContainer for \"90aa429a4462420635d950489d50a6cc382e150c9733b277625e83e3a698af6f\""
Feb 14 01:47:16.973213 containerd[2699]: time="2025-02-14T01:47:16.973189453Z" level=info msg="CreateContainer within sandbox \"b56a97ebca1b2d6cc97e3377ed62b504c87427d1c8a0ca6e07c23a79aa2b6169\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f66bf2701b69d29e48106c1c1d37c703494f7716983aa6dfa64f57db08d9570\""
Feb 14 01:47:16.973498 containerd[2699]: time="2025-02-14T01:47:16.973474333Z" level=info msg="StartContainer for \"5f66bf2701b69d29e48106c1c1d37c703494f7716983aa6dfa64f57db08d9570\""
Feb 14 01:47:17.008925 systemd[1]: Started cri-containerd-31c755a8226673639c4e428b7f02e8bdd2e413cee86093004a68c45ff0dae4ca.scope - libcontainer container 31c755a8226673639c4e428b7f02e8bdd2e413cee86093004a68c45ff0dae4ca.
Feb 14 01:47:17.010047 systemd[1]: Started cri-containerd-5f66bf2701b69d29e48106c1c1d37c703494f7716983aa6dfa64f57db08d9570.scope - libcontainer container 5f66bf2701b69d29e48106c1c1d37c703494f7716983aa6dfa64f57db08d9570.
Feb 14 01:47:17.011143 systemd[1]: Started cri-containerd-90aa429a4462420635d950489d50a6cc382e150c9733b277625e83e3a698af6f.scope - libcontainer container 90aa429a4462420635d950489d50a6cc382e150c9733b277625e83e3a698af6f.
Feb 14 01:47:17.033540 containerd[2699]: time="2025-02-14T01:47:17.033504653Z" level=info msg="StartContainer for \"31c755a8226673639c4e428b7f02e8bdd2e413cee86093004a68c45ff0dae4ca\" returns successfully"
Feb 14 01:47:17.034321 containerd[2699]: time="2025-02-14T01:47:17.034298853Z" level=info msg="StartContainer for \"5f66bf2701b69d29e48106c1c1d37c703494f7716983aa6dfa64f57db08d9570\" returns successfully"
Feb 14 01:47:17.036098 containerd[2699]: time="2025-02-14T01:47:17.036072373Z" level=info msg="StartContainer for \"90aa429a4462420635d950489d50a6cc382e150c9733b277625e83e3a698af6f\" returns successfully"
Feb 14 01:47:17.625762 kubelet[3601]: I0214 01:47:17.625725 3601 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:18.385444 kubelet[3601]: E0214 01:47:18.385411 3601 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-a-385c1ddb28\" not found" node="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:18.494565 kubelet[3601]: I0214 01:47:18.494508 3601 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:19.042486 kubelet[3601]: I0214 01:47:19.042458 3601 apiserver.go:52] "Watching apiserver"
Feb 14 01:47:19.051590 kubelet[3601]: I0214 01:47:19.051569 3601 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 14 01:47:19.180276 kubelet[3601]: E0214 01:47:19.180243 3601 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:20.523060 systemd[1]: Reloading requested from client PID 4020 ('systemctl') (unit session-9.scope)...
Feb 14 01:47:20.523070 systemd[1]: Reloading...
Feb 14 01:47:20.590757 zram_generator::config[4062]: No configuration found.
Feb 14 01:47:20.680870 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 01:47:20.763114 systemd[1]: Reloading finished in 239 ms.
Feb 14 01:47:20.798791 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 01:47:20.811635 systemd[1]: kubelet.service: Deactivated successfully.
Feb 14 01:47:20.812839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 01:47:20.812894 systemd[1]: kubelet.service: Consumed 1.892s CPU time, 130.2M memory peak, 0B memory swap peak.
Feb 14 01:47:20.823038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 01:47:20.924401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 01:47:20.928263 (kubelet)[4120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 14 01:47:20.957997 kubelet[4120]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 14 01:47:20.957997 kubelet[4120]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 14 01:47:20.957997 kubelet[4120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 14 01:47:20.958218 kubelet[4120]: I0214 01:47:20.958051 4120 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 14 01:47:20.962898 kubelet[4120]: I0214 01:47:20.962880 4120 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 14 01:47:20.962898 kubelet[4120]: I0214 01:47:20.962899 4120 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 14 01:47:20.964117 kubelet[4120]: I0214 01:47:20.964098 4120 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 14 01:47:20.965348 kubelet[4120]: I0214 01:47:20.965334 4120 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 14 01:47:20.967254 kubelet[4120]: I0214 01:47:20.967235 4120 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 14 01:47:20.969569 kubelet[4120]: E0214 01:47:20.969545 4120 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 14 01:47:20.969569 kubelet[4120]: I0214 01:47:20.969566 4120 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 14 01:47:20.987774 kubelet[4120]: I0214 01:47:20.987742 4120 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 14 01:47:20.987871 kubelet[4120]: I0214 01:47:20.987860 4120 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 14 01:47:20.987981 kubelet[4120]: I0214 01:47:20.987958 4120 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 14 01:47:20.988128 kubelet[4120]: I0214 01:47:20.987981 4120 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-385c1ddb28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 14 01:47:20.988200 kubelet[4120]: I0214 01:47:20.988137 4120 topology_manager.go:138] "Creating topology manager with none policy"
Feb 14 01:47:20.988200 kubelet[4120]: I0214 01:47:20.988146 4120 container_manager_linux.go:300] "Creating device plugin manager"
Feb 14 01:47:20.988200 kubelet[4120]: I0214 01:47:20.988174 4120 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 01:47:20.988273 kubelet[4120]: I0214 01:47:20.988265 4120 kubelet.go:408] "Attempting to sync node with API server"
Feb 14 01:47:20.988293 kubelet[4120]: I0214 01:47:20.988278 4120 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 14 01:47:20.988314 kubelet[4120]: I0214 01:47:20.988298 4120 kubelet.go:314] "Adding apiserver pod source"
Feb 14 01:47:20.988314 kubelet[4120]: I0214 01:47:20.988308 4120 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 14 01:47:20.989790 kubelet[4120]: I0214 01:47:20.989258 4120 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 14 01:47:20.989790 kubelet[4120]: I0214 01:47:20.989698 4120 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 14 01:47:20.990099 kubelet[4120]: I0214 01:47:20.990085 4120 server.go:1269] "Started kubelet"
Feb 14 01:47:20.990224 kubelet[4120]: I0214 01:47:20.990193 4120 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 14 01:47:20.990402 kubelet[4120]: I0214 01:47:20.990364 4120 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 14 01:47:20.990596 kubelet[4120]: I0214 01:47:20.990583 4120 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 14 01:47:20.991562 kubelet[4120]: I0214 01:47:20.991547 4120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 14 01:47:20.991652 kubelet[4120]: I0214 01:47:20.991625 4120 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 14 01:47:20.992061 kubelet[4120]: I0214 01:47:20.991655 4120 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 14 01:47:20.992332 kubelet[4120]: E0214 01:47:20.991635 4120 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-385c1ddb28\" not found"
Feb 14 01:47:20.992357 kubelet[4120]: I0214 01:47:20.991643 4120 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 14 01:47:20.992625 kubelet[4120]: I0214 01:47:20.992607 4120 reconciler.go:26] "Reconciler: start to sync state"
Feb 14 01:47:20.992817 kubelet[4120]: E0214 01:47:20.992787 4120 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 14 01:47:20.993530 kubelet[4120]: I0214 01:47:20.993510 4120 factory.go:221] Registration of the systemd container factory successfully
Feb 14 01:47:20.993642 kubelet[4120]: I0214 01:47:20.993619 4120 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 14 01:47:20.993872 kubelet[4120]: I0214 01:47:20.993855 4120 server.go:460] "Adding debug handlers to kubelet server"
Feb 14 01:47:20.994376 kubelet[4120]: I0214 01:47:20.994358 4120 factory.go:221] Registration of the containerd container factory successfully
Feb 14 01:47:21.000450 kubelet[4120]: I0214 01:47:21.000278 4120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 14 01:47:21.001610 kubelet[4120]: I0214 01:47:21.001590 4120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 14 01:47:21.001636 kubelet[4120]: I0214 01:47:21.001616 4120 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 14 01:47:21.001636 kubelet[4120]: I0214 01:47:21.001635 4120 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 14 01:47:21.001699 kubelet[4120]: E0214 01:47:21.001680 4120 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 14 01:47:21.024949 kubelet[4120]: I0214 01:47:21.024930 4120 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 14 01:47:21.024949 kubelet[4120]: I0214 01:47:21.024945 4120 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 14 01:47:21.024994 kubelet[4120]: I0214 01:47:21.024962 4120 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 01:47:21.025108 kubelet[4120]: I0214 01:47:21.025095 4120 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 14 01:47:21.025131 kubelet[4120]: I0214 01:47:21.025107 4120 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 14 01:47:21.025131 kubelet[4120]: I0214 01:47:21.025125 4120 policy_none.go:49] "None policy: Start"
Feb 14 01:47:21.025577 kubelet[4120]: I0214 01:47:21.025561 4120 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 14 01:47:21.025597 kubelet[4120]: I0214 01:47:21.025584 4120 state_mem.go:35] "Initializing new in-memory state store"
Feb 14 01:47:21.025784 kubelet[4120]: I0214 01:47:21.025772 4120 state_mem.go:75] "Updated machine memory state"
Feb 14 01:47:21.029690 kubelet[4120]: I0214 01:47:21.029667 4120 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 14 01:47:21.029979 kubelet[4120]: I0214 01:47:21.029966 4120 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 14 01:47:21.030012 kubelet[4120]: I0214 01:47:21.029978 4120 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 14 01:47:21.030137 kubelet[4120]: I0214 01:47:21.030123 4120 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 14 01:47:21.118571 kubelet[4120]: W0214 01:47:21.118547 4120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 01:47:21.118628 kubelet[4120]: W0214 01:47:21.118606 4120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 01:47:21.118676 kubelet[4120]: W0214 01:47:21.118643 4120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 01:47:21.133420 kubelet[4120]: I0214 01:47:21.133398 4120 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.137200 kubelet[4120]: I0214 01:47:21.137178 4120 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.137259 kubelet[4120]: I0214 01:47:21.137245 4120 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194587 kubelet[4120]: I0214 01:47:21.194564 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194636 kubelet[4120]: I0214 01:47:21.194592 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194636 kubelet[4120]: I0214 01:47:21.194615 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73782956d00c4d7adfb741fcee708c2f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" (UID: \"73782956d00c4d7adfb741fcee708c2f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194716 kubelet[4120]: I0214 01:47:21.194649 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194716 kubelet[4120]: I0214 01:47:21.194674 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73782956d00c4d7adfb741fcee708c2f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" (UID: \"73782956d00c4d7adfb741fcee708c2f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194716 kubelet[4120]: I0214 01:47:21.194692 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194716 kubelet[4120]: I0214 01:47:21.194714 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07220b1d1f253e7a5f2669b4959a7fd7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-385c1ddb28\" (UID: \"07220b1d1f253e7a5f2669b4959a7fd7\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194908 kubelet[4120]: I0214 01:47:21.194730 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68dd272c3f8f4acc26abaf13f608c6f4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-385c1ddb28\" (UID: \"68dd272c3f8f4acc26abaf13f608c6f4\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.194908 kubelet[4120]: I0214 01:47:21.194744 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73782956d00c4d7adfb741fcee708c2f-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" (UID: \"73782956d00c4d7adfb741fcee708c2f\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:21.988742 kubelet[4120]: I0214 01:47:21.988719 4120 apiserver.go:52] "Watching apiserver"
Feb 14 01:47:21.992894 kubelet[4120]: I0214 01:47:21.992870 4120 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 14 01:47:22.011690 kubelet[4120]: W0214 01:47:22.011667 4120 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 14 01:47:22.011754 kubelet[4120]: E0214 01:47:22.011723 4120 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-a-385c1ddb28\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:22.023438 kubelet[4120]: I0214 01:47:22.023396 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-385c1ddb28" podStartSLOduration=1.023371093 podStartE2EDuration="1.023371093s" podCreationTimestamp="2025-02-14 01:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:47:22.023156293 +0000 UTC m=+1.091928361" watchObservedRunningTime="2025-02-14 01:47:22.023371093 +0000 UTC m=+1.092143081"
Feb 14 01:47:22.032868 kubelet[4120]: I0214 01:47:22.032830 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-385c1ddb28" podStartSLOduration=1.032817773 podStartE2EDuration="1.032817773s" podCreationTimestamp="2025-02-14 01:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:47:22.028325733 +0000 UTC m=+1.097097721" watchObservedRunningTime="2025-02-14 01:47:22.032817773 +0000 UTC m=+1.101589761"
Feb 14 01:47:22.038613 kubelet[4120]: I0214 01:47:22.038582 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-385c1ddb28" podStartSLOduration=1.038573453 podStartE2EDuration="1.038573453s" podCreationTimestamp="2025-02-14 01:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:47:22.032786813 +0000 UTC m=+1.101558801" watchObservedRunningTime="2025-02-14 01:47:22.038573453 +0000 UTC m=+1.107345441"
Feb 14 01:47:25.443583 sudo[2950]: pam_unix(sudo:session): session closed for user root
Feb 14 01:47:25.509383 sshd[2947]: pam_unix(sshd:session): session closed for user core
Feb 14 01:47:25.512329 systemd[1]: sshd@6-147.75.62.106:22-139.178.68.195:38366.service: Deactivated successfully.
Feb 14 01:47:25.514012 systemd[1]: session-9.scope: Deactivated successfully.
Feb 14 01:47:25.514229 systemd[1]: session-9.scope: Consumed 6.445s CPU time, 167.3M memory peak, 0B memory swap peak.
Feb 14 01:47:25.514603 systemd-logind[2681]: Session 9 logged out. Waiting for processes to exit.
Feb 14 01:47:25.515217 systemd-logind[2681]: Removed session 9.
Feb 14 01:47:26.271098 kubelet[4120]: I0214 01:47:26.271037 4120 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 14 01:47:26.271521 kubelet[4120]: I0214 01:47:26.271472 4120 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 14 01:47:26.271549 containerd[2699]: time="2025-02-14T01:47:26.271337133Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 14 01:47:27.133002 systemd[1]: Created slice kubepods-besteffort-pod3f4797bf_cb39_409f_83cb_5dd4e5a1a6ba.slice - libcontainer container kubepods-besteffort-pod3f4797bf_cb39_409f_83cb_5dd4e5a1a6ba.slice.
Feb 14 01:47:27.227815 kubelet[4120]: I0214 01:47:27.227783 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba-xtables-lock\") pod \"kube-proxy-c2g27\" (UID: \"3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba\") " pod="kube-system/kube-proxy-c2g27"
Feb 14 01:47:27.227913 kubelet[4120]: I0214 01:47:27.227861 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj678\" (UniqueName: \"kubernetes.io/projected/3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba-kube-api-access-gj678\") pod \"kube-proxy-c2g27\" (UID: \"3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba\") " pod="kube-system/kube-proxy-c2g27"
Feb 14 01:47:27.227984 kubelet[4120]: I0214 01:47:27.227939 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba-kube-proxy\") pod \"kube-proxy-c2g27\" (UID: \"3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba\") " pod="kube-system/kube-proxy-c2g27"
Feb 14 01:47:27.228058 kubelet[4120]: I0214 01:47:27.227989 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba-lib-modules\") pod \"kube-proxy-c2g27\" (UID: \"3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba\") " pod="kube-system/kube-proxy-c2g27"
Feb 14 01:47:27.342768 systemd[1]: Created slice kubepods-besteffort-pod5eb3fbb5_7fe8_4a7f_8956_a621a1fe858d.slice - libcontainer container kubepods-besteffort-pod5eb3fbb5_7fe8_4a7f_8956_a621a1fe858d.slice.
Feb 14 01:47:27.429509 kubelet[4120]: I0214 01:47:27.429413 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mkwn\" (UniqueName: \"kubernetes.io/projected/5eb3fbb5-7fe8-4a7f-8956-a621a1fe858d-kube-api-access-9mkwn\") pod \"tigera-operator-76c4976dd7-8vb49\" (UID: \"5eb3fbb5-7fe8-4a7f-8956-a621a1fe858d\") " pod="tigera-operator/tigera-operator-76c4976dd7-8vb49"
Feb 14 01:47:27.429509 kubelet[4120]: I0214 01:47:27.429448 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5eb3fbb5-7fe8-4a7f-8956-a621a1fe858d-var-lib-calico\") pod \"tigera-operator-76c4976dd7-8vb49\" (UID: \"5eb3fbb5-7fe8-4a7f-8956-a621a1fe858d\") " pod="tigera-operator/tigera-operator-76c4976dd7-8vb49"
Feb 14 01:47:27.450980 containerd[2699]: time="2025-02-14T01:47:27.450940813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2g27,Uid:3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba,Namespace:kube-system,Attempt:0,}"
Feb 14 01:47:27.463209 containerd[2699]: time="2025-02-14T01:47:27.463129653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 01:47:27.463209 containerd[2699]: time="2025-02-14T01:47:27.463201133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 01:47:27.463255 containerd[2699]: time="2025-02-14T01:47:27.463212933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:27.463303 containerd[2699]: time="2025-02-14T01:47:27.463284093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:27.485872 systemd[1]: Started cri-containerd-c0e2a07046eaf74fd28a7278f701c16855f7bf45fb56eda589647d8ef45fe668.scope - libcontainer container c0e2a07046eaf74fd28a7278f701c16855f7bf45fb56eda589647d8ef45fe668.
Feb 14 01:47:27.501020 containerd[2699]: time="2025-02-14T01:47:27.500987733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2g27,Uid:3f4797bf-cb39-409f-83cb-5dd4e5a1a6ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0e2a07046eaf74fd28a7278f701c16855f7bf45fb56eda589647d8ef45fe668\""
Feb 14 01:47:27.502936 containerd[2699]: time="2025-02-14T01:47:27.502912853Z" level=info msg="CreateContainer within sandbox \"c0e2a07046eaf74fd28a7278f701c16855f7bf45fb56eda589647d8ef45fe668\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 14 01:47:27.510734 containerd[2699]: time="2025-02-14T01:47:27.510702333Z" level=info msg="CreateContainer within sandbox \"c0e2a07046eaf74fd28a7278f701c16855f7bf45fb56eda589647d8ef45fe668\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"051ad70c7c7e2b5f86643e50ba95eafb06f76eec8e2e27abd83e24703d0d4d91\""
Feb 14 01:47:27.511155 containerd[2699]: time="2025-02-14T01:47:27.511127933Z" level=info msg="StartContainer for \"051ad70c7c7e2b5f86643e50ba95eafb06f76eec8e2e27abd83e24703d0d4d91\""
Feb 14 01:47:27.536928 systemd[1]: Started cri-containerd-051ad70c7c7e2b5f86643e50ba95eafb06f76eec8e2e27abd83e24703d0d4d91.scope - libcontainer container 051ad70c7c7e2b5f86643e50ba95eafb06f76eec8e2e27abd83e24703d0d4d91.
Feb 14 01:47:27.555563 containerd[2699]: time="2025-02-14T01:47:27.555534813Z" level=info msg="StartContainer for \"051ad70c7c7e2b5f86643e50ba95eafb06f76eec8e2e27abd83e24703d0d4d91\" returns successfully"
Feb 14 01:47:27.644822 containerd[2699]: time="2025-02-14T01:47:27.644725933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-8vb49,Uid:5eb3fbb5-7fe8-4a7f-8956-a621a1fe858d,Namespace:tigera-operator,Attempt:0,}"
Feb 14 01:47:27.657493 containerd[2699]: time="2025-02-14T01:47:27.657426773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 01:47:27.657534 containerd[2699]: time="2025-02-14T01:47:27.657488453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 01:47:27.657534 containerd[2699]: time="2025-02-14T01:47:27.657500813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:27.657603 containerd[2699]: time="2025-02-14T01:47:27.657585453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:27.674940 systemd[1]: Started cri-containerd-a08f04193270e53e0ffc45b5d3269c1a2823f319ff7b4d3d35dbad9d39e2d02a.scope - libcontainer container a08f04193270e53e0ffc45b5d3269c1a2823f319ff7b4d3d35dbad9d39e2d02a.
Feb 14 01:47:27.697466 containerd[2699]: time="2025-02-14T01:47:27.697390133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-8vb49,Uid:5eb3fbb5-7fe8-4a7f-8956-a621a1fe858d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a08f04193270e53e0ffc45b5d3269c1a2823f319ff7b4d3d35dbad9d39e2d02a\""
Feb 14 01:47:27.698630 containerd[2699]: time="2025-02-14T01:47:27.698597933Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 14 01:47:28.023814 kubelet[4120]: I0214 01:47:28.023662 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c2g27" podStartSLOduration=1.023645573 podStartE2EDuration="1.023645573s" podCreationTimestamp="2025-02-14 01:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:47:28.023397173 +0000 UTC m=+7.092169161" watchObservedRunningTime="2025-02-14 01:47:28.023645573 +0000 UTC m=+7.092417561"
Feb 14 01:47:30.349244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514370787.mount: Deactivated successfully.
Feb 14 01:47:30.531334 containerd[2699]: time="2025-02-14T01:47:30.531282333Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:30.531728 containerd[2699]: time="2025-02-14T01:47:30.531293693Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 14 01:47:30.532148 containerd[2699]: time="2025-02-14T01:47:30.532130133Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:30.534280 containerd[2699]: time="2025-02-14T01:47:30.534256333Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 01:47:30.534922 containerd[2699]: time="2025-02-14T01:47:30.534899013Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.8362722s"
Feb 14 01:47:30.534949 containerd[2699]: time="2025-02-14T01:47:30.534929253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 14 01:47:30.536528 containerd[2699]: time="2025-02-14T01:47:30.536504933Z" level=info msg="CreateContainer within sandbox \"a08f04193270e53e0ffc45b5d3269c1a2823f319ff7b4d3d35dbad9d39e2d02a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 14 01:47:30.541358 containerd[2699]: time="2025-02-14T01:47:30.541333653Z" level=info msg="CreateContainer within sandbox \"a08f04193270e53e0ffc45b5d3269c1a2823f319ff7b4d3d35dbad9d39e2d02a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"72e64d1933cd8725281ee3a2cfa7d3e09cc2893c6f60d959de4331cbaa9e1219\""
Feb 14 01:47:30.541698 containerd[2699]: time="2025-02-14T01:47:30.541673573Z" level=info msg="StartContainer for \"72e64d1933cd8725281ee3a2cfa7d3e09cc2893c6f60d959de4331cbaa9e1219\""
Feb 14 01:47:30.566917 systemd[1]: Started cri-containerd-72e64d1933cd8725281ee3a2cfa7d3e09cc2893c6f60d959de4331cbaa9e1219.scope - libcontainer container 72e64d1933cd8725281ee3a2cfa7d3e09cc2893c6f60d959de4331cbaa9e1219.
Feb 14 01:47:30.583047 containerd[2699]: time="2025-02-14T01:47:30.583014333Z" level=info msg="StartContainer for \"72e64d1933cd8725281ee3a2cfa7d3e09cc2893c6f60d959de4331cbaa9e1219\" returns successfully"
Feb 14 01:47:31.025871 kubelet[4120]: I0214 01:47:31.025820 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-8vb49" podStartSLOduration=1.188563413 podStartE2EDuration="4.025806493s" podCreationTimestamp="2025-02-14 01:47:27 +0000 UTC" firstStartedPulling="2025-02-14 01:47:27.698244933 +0000 UTC m=+6.767016921" lastFinishedPulling="2025-02-14 01:47:30.535488013 +0000 UTC m=+9.604260001" observedRunningTime="2025-02-14 01:47:31.025678573 +0000 UTC m=+10.094450561" watchObservedRunningTime="2025-02-14 01:47:31.025806493 +0000 UTC m=+10.094578481"
Feb 14 01:47:32.585198 update_engine[2691]: I20250214 01:47:32.585075 2691 update_attempter.cc:509] Updating boot flags...
Feb 14 01:47:32.628763 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (4739)
Feb 14 01:47:32.657761 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (4741)
Feb 14 01:47:32.685761 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (4741)
Feb 14 01:47:34.838292 systemd[1]: Created slice kubepods-besteffort-pod2bd779b6_a102_4834_831a_172f6d5ba151.slice - libcontainer container kubepods-besteffort-pod2bd779b6_a102_4834_831a_172f6d5ba151.slice.
Feb 14 01:47:34.871007 kubelet[4120]: I0214 01:47:34.870905 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd779b6-a102-4834-831a-172f6d5ba151-tigera-ca-bundle\") pod \"calico-typha-5d84f99b5c-pgw2g\" (UID: \"2bd779b6-a102-4834-831a-172f6d5ba151\") " pod="calico-system/calico-typha-5d84f99b5c-pgw2g"
Feb 14 01:47:34.871007 kubelet[4120]: I0214 01:47:34.870945 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2bd779b6-a102-4834-831a-172f6d5ba151-typha-certs\") pod \"calico-typha-5d84f99b5c-pgw2g\" (UID: \"2bd779b6-a102-4834-831a-172f6d5ba151\") " pod="calico-system/calico-typha-5d84f99b5c-pgw2g"
Feb 14 01:47:34.871007 kubelet[4120]: I0214 01:47:34.870963 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6snfb\" (UniqueName: \"kubernetes.io/projected/2bd779b6-a102-4834-831a-172f6d5ba151-kube-api-access-6snfb\") pod \"calico-typha-5d84f99b5c-pgw2g\" (UID: \"2bd779b6-a102-4834-831a-172f6d5ba151\") " pod="calico-system/calico-typha-5d84f99b5c-pgw2g"
Feb 14 01:47:35.015037 systemd[1]: Created slice kubepods-besteffort-pod8f953ea6_404d_4776_8180_babd2d50eff9.slice - libcontainer container kubepods-besteffort-pod8f953ea6_404d_4776_8180_babd2d50eff9.slice.
Feb 14 01:47:35.072507 kubelet[4120]: I0214 01:47:35.072405 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-var-lib-calico\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072507 kubelet[4120]: I0214 01:47:35.072437 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-cni-log-dir\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072507 kubelet[4120]: I0214 01:47:35.072455 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-cni-bin-dir\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072507 kubelet[4120]: I0214 01:47:35.072470 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8f953ea6-404d-4776-8180-babd2d50eff9-node-certs\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072770 kubelet[4120]: I0214 01:47:35.072567 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-cni-net-dir\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072770 kubelet[4120]: I0214 01:47:35.072628 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwms6\" (UniqueName: \"kubernetes.io/projected/8f953ea6-404d-4776-8180-babd2d50eff9-kube-api-access-pwms6\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072770 kubelet[4120]: I0214 01:47:35.072665 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-xtables-lock\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072770 kubelet[4120]: I0214 01:47:35.072689 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-policysync\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072770 kubelet[4120]: I0214 01:47:35.072707 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f953ea6-404d-4776-8180-babd2d50eff9-tigera-ca-bundle\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072882 kubelet[4120]: I0214 01:47:35.072722 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-flexvol-driver-host\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072882 kubelet[4120]: I0214 01:47:35.072738 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-lib-modules\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.072882 kubelet[4120]: I0214 01:47:35.072759 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f953ea6-404d-4776-8180-babd2d50eff9-var-run-calico\") pod \"calico-node-jcv5f\" (UID: \"8f953ea6-404d-4776-8180-babd2d50eff9\") " pod="calico-system/calico-node-jcv5f"
Feb 14 01:47:35.141170 containerd[2699]: time="2025-02-14T01:47:35.141082733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d84f99b5c-pgw2g,Uid:2bd779b6-a102-4834-831a-172f6d5ba151,Namespace:calico-system,Attempt:0,}"
Feb 14 01:47:35.154702 containerd[2699]: time="2025-02-14T01:47:35.154632117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 01:47:35.154702 containerd[2699]: time="2025-02-14T01:47:35.154690276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 01:47:35.154702 containerd[2699]: time="2025-02-14T01:47:35.154700996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:35.154797 containerd[2699]: time="2025-02-14T01:47:35.154784276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:35.174820 kubelet[4120]: E0214 01:47:35.174798 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.174820 kubelet[4120]: W0214 01:47:35.174817 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.174930 kubelet[4120]: E0214 01:47:35.174849 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.174861 systemd[1]: Started cri-containerd-6c233a18891feae8e615b59a337ea05edb1cf8ace8b3f7fe50752e12e1568a8f.scope - libcontainer container 6c233a18891feae8e615b59a337ea05edb1cf8ace8b3f7fe50752e12e1568a8f.
Feb 14 01:47:35.175903 kubelet[4120]: E0214 01:47:35.175842 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.175903 kubelet[4120]: W0214 01:47:35.175859 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.175903 kubelet[4120]: E0214 01:47:35.175872 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.182636 kubelet[4120]: E0214 01:47:35.182593 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.182636 kubelet[4120]: W0214 01:47:35.182608 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.182636 kubelet[4120]: E0214 01:47:35.182621 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.196893 containerd[2699]: time="2025-02-14T01:47:35.196864697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d84f99b5c-pgw2g,Uid:2bd779b6-a102-4834-831a-172f6d5ba151,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c233a18891feae8e615b59a337ea05edb1cf8ace8b3f7fe50752e12e1568a8f\""
Feb 14 01:47:35.197969 containerd[2699]: time="2025-02-14T01:47:35.197952249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 14 01:47:35.209860 kubelet[4120]: E0214 01:47:35.209829 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bf5gn" podUID="368a98f4-3c61-48c7-a03d-61e5961b1cc9"
Feb 14 01:47:35.264403 kubelet[4120]: E0214 01:47:35.264385 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.264403 kubelet[4120]: W0214 01:47:35.264401 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.264502 kubelet[4120]: E0214 01:47:35.264415 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.264659 kubelet[4120]: E0214 01:47:35.264650 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.264659 kubelet[4120]: W0214 01:47:35.264658 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.264723 kubelet[4120]: E0214 01:47:35.264666 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.264891 kubelet[4120]: E0214 01:47:35.264881 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.264891 kubelet[4120]: W0214 01:47:35.264890 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.264954 kubelet[4120]: E0214 01:47:35.264897 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.265045 kubelet[4120]: E0214 01:47:35.265035 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.265045 kubelet[4120]: W0214 01:47:35.265043 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.265104 kubelet[4120]: E0214 01:47:35.265050 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.265227 kubelet[4120]: E0214 01:47:35.265219 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.265227 kubelet[4120]: W0214 01:47:35.265226 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.265273 kubelet[4120]: E0214 01:47:35.265234 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.265404 kubelet[4120]: E0214 01:47:35.265396 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.265404 kubelet[4120]: W0214 01:47:35.265403 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.265460 kubelet[4120]: E0214 01:47:35.265410 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.265573 kubelet[4120]: E0214 01:47:35.265566 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.265573 kubelet[4120]: W0214 01:47:35.265573 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.265620 kubelet[4120]: E0214 01:47:35.265580 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.265767 kubelet[4120]: E0214 01:47:35.265759 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.265767 kubelet[4120]: W0214 01:47:35.265767 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.265827 kubelet[4120]: E0214 01:47:35.265775 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.265933 kubelet[4120]: E0214 01:47:35.265922 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.265933 kubelet[4120]: W0214 01:47:35.265931 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.265986 kubelet[4120]: E0214 01:47:35.265938 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.266074 kubelet[4120]: E0214 01:47:35.266066 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.266074 kubelet[4120]: W0214 01:47:35.266074 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.266074 kubelet[4120]: E0214 01:47:35.266081 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.266230 kubelet[4120]: E0214 01:47:35.266222 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.266230 kubelet[4120]: W0214 01:47:35.266228 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.266277 kubelet[4120]: E0214 01:47:35.266234 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.266412 kubelet[4120]: E0214 01:47:35.266403 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.266412 kubelet[4120]: W0214 01:47:35.266411 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.266463 kubelet[4120]: E0214 01:47:35.266418 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.266596 kubelet[4120]: E0214 01:47:35.266587 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.266596 kubelet[4120]: W0214 01:47:35.266595 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.266647 kubelet[4120]: E0214 01:47:35.266602 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.266794 kubelet[4120]: E0214 01:47:35.266785 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.266794 kubelet[4120]: W0214 01:47:35.266793 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.266847 kubelet[4120]: E0214 01:47:35.266800 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.266944 kubelet[4120]: E0214 01:47:35.266934 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.266944 kubelet[4120]: W0214 01:47:35.266941 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.266995 kubelet[4120]: E0214 01:47:35.266948 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.267148 kubelet[4120]: E0214 01:47:35.267141 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.267148 kubelet[4120]: W0214 01:47:35.267148 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.267194 kubelet[4120]: E0214 01:47:35.267154 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.267348 kubelet[4120]: E0214 01:47:35.267340 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.267348 kubelet[4120]: W0214 01:47:35.267347 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.267398 kubelet[4120]: E0214 01:47:35.267353 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.267531 kubelet[4120]: E0214 01:47:35.267524 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.267531 kubelet[4120]: W0214 01:47:35.267530 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.267574 kubelet[4120]: E0214 01:47:35.267536 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.267736 kubelet[4120]: E0214 01:47:35.267729 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.267762 kubelet[4120]: W0214 01:47:35.267736 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.267762 kubelet[4120]: E0214 01:47:35.267742 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.267933 kubelet[4120]: E0214 01:47:35.267924 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.267933 kubelet[4120]: W0214 01:47:35.267932 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.267980 kubelet[4120]: E0214 01:47:35.267939 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.274179 kubelet[4120]: E0214 01:47:35.274165 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.274211 kubelet[4120]: W0214 01:47:35.274181 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.274211 kubelet[4120]: E0214 01:47:35.274194 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.274252 kubelet[4120]: I0214 01:47:35.274218 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/368a98f4-3c61-48c7-a03d-61e5961b1cc9-registration-dir\") pod \"csi-node-driver-bf5gn\" (UID: \"368a98f4-3c61-48c7-a03d-61e5961b1cc9\") " pod="calico-system/csi-node-driver-bf5gn"
Feb 14 01:47:35.274392 kubelet[4120]: E0214 01:47:35.274381 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.274414 kubelet[4120]: W0214 01:47:35.274391 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.274414 kubelet[4120]: E0214 01:47:35.274403 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.274456 kubelet[4120]: I0214 01:47:35.274417 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/368a98f4-3c61-48c7-a03d-61e5961b1cc9-socket-dir\") pod \"csi-node-driver-bf5gn\" (UID: \"368a98f4-3c61-48c7-a03d-61e5961b1cc9\") " pod="calico-system/csi-node-driver-bf5gn"
Feb 14 01:47:35.274575 kubelet[4120]: E0214 01:47:35.274566 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.274595 kubelet[4120]: W0214 01:47:35.274575 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.274595 kubelet[4120]: E0214 01:47:35.274586 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.274632 kubelet[4120]: I0214 01:47:35.274603 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/368a98f4-3c61-48c7-a03d-61e5961b1cc9-varrun\") pod \"csi-node-driver-bf5gn\" (UID: \"368a98f4-3c61-48c7-a03d-61e5961b1cc9\") " pod="calico-system/csi-node-driver-bf5gn"
Feb 14 01:47:35.274770 kubelet[4120]: E0214 01:47:35.274760 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.274770 kubelet[4120]: W0214 01:47:35.274769 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.274822 kubelet[4120]: E0214 01:47:35.274780 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.274822 kubelet[4120]: I0214 01:47:35.274795 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b767\" (UniqueName: \"kubernetes.io/projected/368a98f4-3c61-48c7-a03d-61e5961b1cc9-kube-api-access-4b767\") pod \"csi-node-driver-bf5gn\" (UID: \"368a98f4-3c61-48c7-a03d-61e5961b1cc9\") " pod="calico-system/csi-node-driver-bf5gn"
Feb 14 01:47:35.275009 kubelet[4120]: E0214 01:47:35.275000 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.275031 kubelet[4120]: W0214 01:47:35.275009 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.275031 kubelet[4120]: E0214 01:47:35.275020 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.275071 kubelet[4120]: I0214 01:47:35.275034 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/368a98f4-3c61-48c7-a03d-61e5961b1cc9-kubelet-dir\") pod \"csi-node-driver-bf5gn\" (UID: \"368a98f4-3c61-48c7-a03d-61e5961b1cc9\") " pod="calico-system/csi-node-driver-bf5gn"
Feb 14 01:47:35.275258 kubelet[4120]: E0214 01:47:35.275249 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.275278 kubelet[4120]: W0214 01:47:35.275258 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.275278 kubelet[4120]: E0214 01:47:35.275271 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.275446 kubelet[4120]: E0214 01:47:35.275439 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.275466 kubelet[4120]: W0214 01:47:35.275446 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.275490 kubelet[4120]: E0214 01:47:35.275469 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.275672 kubelet[4120]: E0214 01:47:35.275664 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.275693 kubelet[4120]: W0214 01:47:35.275671 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.275693 kubelet[4120]: E0214 01:47:35.275689 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 14 01:47:35.275866 kubelet[4120]: E0214 01:47:35.275858 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:35.275892 kubelet[4120]: W0214 01:47:35.275867 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:35.275892 kubelet[4120]: E0214 01:47:35.275886 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.276091 kubelet[4120]: E0214 01:47:35.276083 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.276111 kubelet[4120]: W0214 01:47:35.276091 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.276130 kubelet[4120]: E0214 01:47:35.276109 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.276286 kubelet[4120]: E0214 01:47:35.276279 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.276308 kubelet[4120]: W0214 01:47:35.276286 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.276308 kubelet[4120]: E0214 01:47:35.276304 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.276466 kubelet[4120]: E0214 01:47:35.276458 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.276486 kubelet[4120]: W0214 01:47:35.276466 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.276486 kubelet[4120]: E0214 01:47:35.276474 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.276701 kubelet[4120]: E0214 01:47:35.276693 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.276723 kubelet[4120]: W0214 01:47:35.276701 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.276723 kubelet[4120]: E0214 01:47:35.276708 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.277328 kubelet[4120]: E0214 01:47:35.277245 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.277328 kubelet[4120]: W0214 01:47:35.277273 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.277328 kubelet[4120]: E0214 01:47:35.277290 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.278373 kubelet[4120]: E0214 01:47:35.278336 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.278373 kubelet[4120]: W0214 01:47:35.278361 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.278467 kubelet[4120]: E0214 01:47:35.278438 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.317531 containerd[2699]: time="2025-02-14T01:47:35.317501481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jcv5f,Uid:8f953ea6-404d-4776-8180-babd2d50eff9,Namespace:calico-system,Attempt:0,}" Feb 14 01:47:35.338654 containerd[2699]: time="2025-02-14T01:47:35.338579931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:35.338654 containerd[2699]: time="2025-02-14T01:47:35.338638371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:35.338654 containerd[2699]: time="2025-02-14T01:47:35.338649251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:35.338804 containerd[2699]: time="2025-02-14T01:47:35.338727130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:35.358926 systemd[1]: Started cri-containerd-ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36.scope - libcontainer container ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36. Feb 14 01:47:35.374672 containerd[2699]: time="2025-02-14T01:47:35.374627715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jcv5f,Uid:8f953ea6-404d-4776-8180-babd2d50eff9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36\"" Feb 14 01:47:35.375446 kubelet[4120]: E0214 01:47:35.375424 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.375446 kubelet[4120]: W0214 01:47:35.375444 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.375508 kubelet[4120]: E0214 01:47:35.375463 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.375691 kubelet[4120]: E0214 01:47:35.375680 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.375691 kubelet[4120]: W0214 01:47:35.375689 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.375744 kubelet[4120]: E0214 01:47:35.375701 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.375950 kubelet[4120]: E0214 01:47:35.375933 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.375976 kubelet[4120]: W0214 01:47:35.375950 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.375996 kubelet[4120]: E0214 01:47:35.375969 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.376191 kubelet[4120]: E0214 01:47:35.376178 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.376191 kubelet[4120]: W0214 01:47:35.376186 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.376241 kubelet[4120]: E0214 01:47:35.376196 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.376403 kubelet[4120]: E0214 01:47:35.376391 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.376403 kubelet[4120]: W0214 01:47:35.376398 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.376454 kubelet[4120]: E0214 01:47:35.376409 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.376705 kubelet[4120]: E0214 01:47:35.376696 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.376705 kubelet[4120]: W0214 01:47:35.376705 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.376753 kubelet[4120]: E0214 01:47:35.376716 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.376880 kubelet[4120]: E0214 01:47:35.376868 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.376880 kubelet[4120]: W0214 01:47:35.376877 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.376937 kubelet[4120]: E0214 01:47:35.376887 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.377045 kubelet[4120]: E0214 01:47:35.377036 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.377045 kubelet[4120]: W0214 01:47:35.377045 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.377091 kubelet[4120]: E0214 01:47:35.377082 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.377283 kubelet[4120]: E0214 01:47:35.377274 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.377283 kubelet[4120]: W0214 01:47:35.377281 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.377328 kubelet[4120]: E0214 01:47:35.377315 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.377471 kubelet[4120]: E0214 01:47:35.377463 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.377471 kubelet[4120]: W0214 01:47:35.377470 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.377521 kubelet[4120]: E0214 01:47:35.377481 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.377686 kubelet[4120]: E0214 01:47:35.377674 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.377686 kubelet[4120]: W0214 01:47:35.377682 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.377734 kubelet[4120]: E0214 01:47:35.377693 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.377909 kubelet[4120]: E0214 01:47:35.377895 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.377909 kubelet[4120]: W0214 01:47:35.377905 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.377967 kubelet[4120]: E0214 01:47:35.377916 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.378369 kubelet[4120]: E0214 01:47:35.378211 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.378369 kubelet[4120]: W0214 01:47:35.378233 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.378369 kubelet[4120]: E0214 01:47:35.378252 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.378534 kubelet[4120]: E0214 01:47:35.378521 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.378590 kubelet[4120]: W0214 01:47:35.378579 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.378661 kubelet[4120]: E0214 01:47:35.378642 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.378860 kubelet[4120]: E0214 01:47:35.378847 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.378919 kubelet[4120]: W0214 01:47:35.378909 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.378999 kubelet[4120]: E0214 01:47:35.378988 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.379219 kubelet[4120]: E0214 01:47:35.379206 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.379278 kubelet[4120]: W0214 01:47:35.379267 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.379336 kubelet[4120]: E0214 01:47:35.379325 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.379580 kubelet[4120]: E0214 01:47:35.379567 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.379647 kubelet[4120]: W0214 01:47:35.379636 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.379717 kubelet[4120]: E0214 01:47:35.379706 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.379996 kubelet[4120]: E0214 01:47:35.379984 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.380054 kubelet[4120]: W0214 01:47:35.380043 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.380112 kubelet[4120]: E0214 01:47:35.380101 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.380408 kubelet[4120]: E0214 01:47:35.380331 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.380408 kubelet[4120]: W0214 01:47:35.380342 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.380408 kubelet[4120]: E0214 01:47:35.380411 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.380556 kubelet[4120]: E0214 01:47:35.380545 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.380605 kubelet[4120]: W0214 01:47:35.380596 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.380671 kubelet[4120]: E0214 01:47:35.380655 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.380886 kubelet[4120]: E0214 01:47:35.380875 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.380958 kubelet[4120]: W0214 01:47:35.380947 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.381020 kubelet[4120]: E0214 01:47:35.381009 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.381290 kubelet[4120]: E0214 01:47:35.381278 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.381360 kubelet[4120]: W0214 01:47:35.381349 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.381412 kubelet[4120]: E0214 01:47:35.381402 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.381684 kubelet[4120]: E0214 01:47:35.381668 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.381684 kubelet[4120]: W0214 01:47:35.381681 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.381776 kubelet[4120]: E0214 01:47:35.381694 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.381935 kubelet[4120]: E0214 01:47:35.381925 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.381935 kubelet[4120]: W0214 01:47:35.381935 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.381986 kubelet[4120]: E0214 01:47:35.381954 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.382152 kubelet[4120]: E0214 01:47:35.382144 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.382179 kubelet[4120]: W0214 01:47:35.382152 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.382179 kubelet[4120]: E0214 01:47:35.382161 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:35.390133 kubelet[4120]: E0214 01:47:35.390116 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:35.390133 kubelet[4120]: W0214 01:47:35.390129 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:35.390133 kubelet[4120]: E0214 01:47:35.390142 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:35.934446 containerd[2699]: time="2025-02-14T01:47:35.934395223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:35.934569 containerd[2699]: time="2025-02-14T01:47:35.934473862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 14 01:47:35.935202 containerd[2699]: time="2025-02-14T01:47:35.935179617Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:35.937144 containerd[2699]: time="2025-02-14T01:47:35.937110604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:35.937687 containerd[2699]: time="2025-02-14T01:47:35.937658680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 739.676751ms" Feb 14 01:47:35.937713 containerd[2699]: time="2025-02-14T01:47:35.937691839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 14 01:47:35.938445 containerd[2699]: time="2025-02-14T01:47:35.938420754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 14 01:47:35.943299 containerd[2699]: time="2025-02-14T01:47:35.943267840Z" level=info msg="CreateContainer within sandbox \"6c233a18891feae8e615b59a337ea05edb1cf8ace8b3f7fe50752e12e1568a8f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 14 01:47:35.948423 containerd[2699]: time="2025-02-14T01:47:35.948386764Z" level=info msg="CreateContainer within sandbox \"6c233a18891feae8e615b59a337ea05edb1cf8ace8b3f7fe50752e12e1568a8f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3ff44ea63be82411ffcaf7431109cc2b1e14e7bc40e9abdf3621f2d61b379cfe\"" Feb 14 01:47:35.948794 containerd[2699]: time="2025-02-14T01:47:35.948769801Z" level=info msg="StartContainer for \"3ff44ea63be82411ffcaf7431109cc2b1e14e7bc40e9abdf3621f2d61b379cfe\"" Feb 14 01:47:35.970864 systemd[1]: Started cri-containerd-3ff44ea63be82411ffcaf7431109cc2b1e14e7bc40e9abdf3621f2d61b379cfe.scope - libcontainer container 3ff44ea63be82411ffcaf7431109cc2b1e14e7bc40e9abdf3621f2d61b379cfe. 
Feb 14 01:47:35.999584 containerd[2699]: time="2025-02-14T01:47:35.999550400Z" level=info msg="StartContainer for \"3ff44ea63be82411ffcaf7431109cc2b1e14e7bc40e9abdf3621f2d61b379cfe\" returns successfully"
Feb 14 01:47:36.034134 kubelet[4120]: I0214 01:47:36.034089 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d84f99b5c-pgw2g" podStartSLOduration=1.293462546 podStartE2EDuration="2.03407433s" podCreationTimestamp="2025-02-14 01:47:34 +0000 UTC" firstStartedPulling="2025-02-14 01:47:35.197702651 +0000 UTC m=+14.266474599" lastFinishedPulling="2025-02-14 01:47:35.938314395 +0000 UTC m=+15.007086383" observedRunningTime="2025-02-14 01:47:36.034026051 +0000 UTC m=+15.102798039" watchObservedRunningTime="2025-02-14 01:47:36.03407433 +0000 UTC m=+15.102846318"
Feb 14 01:47:36.073728 kubelet[4120]: E0214 01:47:36.073705 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 14 01:47:36.073728 kubelet[4120]: W0214 01:47:36.073724 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 14 01:47:36.073877 kubelet[4120]: E0214 01:47:36.073743 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Feb 14 01:47:36.084220 kubelet[4120]: E0214 01:47:36.084212 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:36.084239 kubelet[4120]: W0214 01:47:36.084220 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:36.084239 kubelet[4120]: E0214 01:47:36.084230 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:36.084438 kubelet[4120]: E0214 01:47:36.084428 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:36.084458 kubelet[4120]: W0214 01:47:36.084438 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:36.084458 kubelet[4120]: E0214 01:47:36.084449 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 01:47:36.084597 kubelet[4120]: E0214 01:47:36.084587 4120 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 01:47:36.084616 kubelet[4120]: W0214 01:47:36.084597 4120 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 01:47:36.084616 kubelet[4120]: E0214 01:47:36.084604 4120 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 01:47:36.470816 containerd[2699]: time="2025-02-14T01:47:36.470780985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:36.471165 containerd[2699]: time="2025-02-14T01:47:36.470842584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 14 01:47:36.471553 containerd[2699]: time="2025-02-14T01:47:36.471533460Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:36.473227 containerd[2699]: time="2025-02-14T01:47:36.473206489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:36.473890 containerd[2699]: time="2025-02-14T01:47:36.473867604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 535.41549ms" Feb 14 01:47:36.473929 containerd[2699]: time="2025-02-14T01:47:36.473897484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 14 01:47:36.475473 containerd[2699]: time="2025-02-14T01:47:36.475451034Z" level=info msg="CreateContainer within sandbox \"ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 14 01:47:36.480867 containerd[2699]: time="2025-02-14T01:47:36.480842958Z" level=info msg="CreateContainer within sandbox \"ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0\"" Feb 14 01:47:36.481196 containerd[2699]: time="2025-02-14T01:47:36.481173596Z" level=info msg="StartContainer for \"cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0\"" Feb 14 01:47:36.513878 systemd[1]: Started cri-containerd-cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0.scope - libcontainer container cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0. Feb 14 01:47:36.540172 containerd[2699]: time="2025-02-14T01:47:36.540138483Z" level=info msg="StartContainer for \"cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0\" returns successfully" Feb 14 01:47:36.544984 systemd[1]: cri-containerd-cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0.scope: Deactivated successfully. 
Feb 14 01:47:36.678508 containerd[2699]: time="2025-02-14T01:47:36.678462683Z" level=info msg="shim disconnected" id=cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0 namespace=k8s.io Feb 14 01:47:36.678576 containerd[2699]: time="2025-02-14T01:47:36.678509043Z" level=warning msg="cleaning up after shim disconnected" id=cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0 namespace=k8s.io Feb 14 01:47:36.678576 containerd[2699]: time="2025-02-14T01:47:36.678517523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:47:36.975338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbaa9129b59841fa4b1ab3801c57a6592f31cef0f9cf2ee0a0dc0212fde728d0-rootfs.mount: Deactivated successfully. Feb 14 01:47:37.002456 kubelet[4120]: E0214 01:47:37.002419 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bf5gn" podUID="368a98f4-3c61-48c7-a03d-61e5961b1cc9" Feb 14 01:47:37.030727 containerd[2699]: time="2025-02-14T01:47:37.030696552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 14 01:47:38.527861 containerd[2699]: time="2025-02-14T01:47:38.527822499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:38.528124 containerd[2699]: time="2025-02-14T01:47:38.527889659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 14 01:47:38.528646 containerd[2699]: time="2025-02-14T01:47:38.528626215Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:38.530352 containerd[2699]: 
time="2025-02-14T01:47:38.530324605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:38.531092 containerd[2699]: time="2025-02-14T01:47:38.531070520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.500337768s" Feb 14 01:47:38.531116 containerd[2699]: time="2025-02-14T01:47:38.531099160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 14 01:47:38.532758 containerd[2699]: time="2025-02-14T01:47:38.532731271Z" level=info msg="CreateContainer within sandbox \"ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 14 01:47:38.538517 containerd[2699]: time="2025-02-14T01:47:38.538490877Z" level=info msg="CreateContainer within sandbox \"ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce\"" Feb 14 01:47:38.538854 containerd[2699]: time="2025-02-14T01:47:38.538838355Z" level=info msg="StartContainer for \"32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce\"" Feb 14 01:47:38.570935 systemd[1]: Started cri-containerd-32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce.scope - libcontainer container 32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce. 
Feb 14 01:47:38.589444 containerd[2699]: time="2025-02-14T01:47:38.589412379Z" level=info msg="StartContainer for \"32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce\" returns successfully" Feb 14 01:47:38.927399 containerd[2699]: time="2025-02-14T01:47:38.927358243Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 01:47:38.928899 systemd[1]: cri-containerd-32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce.scope: Deactivated successfully. Feb 14 01:47:38.970508 kubelet[4120]: I0214 01:47:38.970483 4120 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 14 01:47:38.988267 systemd[1]: Created slice kubepods-besteffort-pod3ccf9c15_46bf_4f15_bf25_3067d3a65e25.slice - libcontainer container kubepods-besteffort-pod3ccf9c15_46bf_4f15_bf25_3067d3a65e25.slice. Feb 14 01:47:38.991874 systemd[1]: Created slice kubepods-burstable-pod3b0ed4b7_1cef_42d9_9eba_8a303a0a9ff1.slice - libcontainer container kubepods-burstable-pod3b0ed4b7_1cef_42d9_9eba_8a303a0a9ff1.slice. Feb 14 01:47:38.995699 systemd[1]: Created slice kubepods-burstable-pod66e17b36_25ed_486f_b40e_ad1476b372c7.slice - libcontainer container kubepods-burstable-pod66e17b36_25ed_486f_b40e_ad1476b372c7.slice. Feb 14 01:47:38.999453 systemd[1]: Created slice kubepods-besteffort-pod7a7c0e06_d39c_44c9_a08b_42e6e8f22180.slice - libcontainer container kubepods-besteffort-pod7a7c0e06_d39c_44c9_a08b_42e6e8f22180.slice. Feb 14 01:47:39.003207 systemd[1]: Created slice kubepods-besteffort-podd4057f79_38ab_4790_ae6d_39417a81be01.slice - libcontainer container kubepods-besteffort-podd4057f79_38ab_4790_ae6d_39417a81be01.slice. 
Feb 14 01:47:39.009739 systemd[1]: Created slice kubepods-besteffort-pod368a98f4_3c61_48c7_a03d_61e5961b1cc9.slice - libcontainer container kubepods-besteffort-pod368a98f4_3c61_48c7_a03d_61e5961b1cc9.slice. Feb 14 01:47:39.011308 containerd[2699]: time="2025-02-14T01:47:39.011269196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bf5gn,Uid:368a98f4-3c61-48c7-a03d-61e5961b1cc9,Namespace:calico-system,Attempt:0,}" Feb 14 01:47:39.095688 containerd[2699]: time="2025-02-14T01:47:39.095632454Z" level=info msg="shim disconnected" id=32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce namespace=k8s.io Feb 14 01:47:39.095688 containerd[2699]: time="2025-02-14T01:47:39.095684213Z" level=warning msg="cleaning up after shim disconnected" id=32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce namespace=k8s.io Feb 14 01:47:39.095793 containerd[2699]: time="2025-02-14T01:47:39.095692093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 01:47:39.099555 kubelet[4120]: I0214 01:47:39.099526 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ds9q\" (UniqueName: \"kubernetes.io/projected/3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1-kube-api-access-6ds9q\") pod \"coredns-6f6b679f8f-rnmrh\" (UID: \"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1\") " pod="kube-system/coredns-6f6b679f8f-rnmrh" Feb 14 01:47:39.099586 kubelet[4120]: I0214 01:47:39.099566 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzb7x\" (UniqueName: \"kubernetes.io/projected/3ccf9c15-46bf-4f15-bf25-3067d3a65e25-kube-api-access-dzb7x\") pod \"calico-apiserver-5f5c9f9c9f-cnrft\" (UID: \"3ccf9c15-46bf-4f15-bf25-3067d3a65e25\") " pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-cnrft" Feb 14 01:47:39.099608 kubelet[4120]: I0214 01:47:39.099585 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-28jfr\" (UniqueName: \"kubernetes.io/projected/7a7c0e06-d39c-44c9-a08b-42e6e8f22180-kube-api-access-28jfr\") pod \"calico-kube-controllers-67c94569b7-pl64x\" (UID: \"7a7c0e06-d39c-44c9-a08b-42e6e8f22180\") " pod="calico-system/calico-kube-controllers-67c94569b7-pl64x" Feb 14 01:47:39.099676 kubelet[4120]: I0214 01:47:39.099649 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ccf9c15-46bf-4f15-bf25-3067d3a65e25-calico-apiserver-certs\") pod \"calico-apiserver-5f5c9f9c9f-cnrft\" (UID: \"3ccf9c15-46bf-4f15-bf25-3067d3a65e25\") " pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-cnrft" Feb 14 01:47:39.099707 kubelet[4120]: I0214 01:47:39.099685 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7c0e06-d39c-44c9-a08b-42e6e8f22180-tigera-ca-bundle\") pod \"calico-kube-controllers-67c94569b7-pl64x\" (UID: \"7a7c0e06-d39c-44c9-a08b-42e6e8f22180\") " pod="calico-system/calico-kube-controllers-67c94569b7-pl64x" Feb 14 01:47:39.099734 kubelet[4120]: I0214 01:47:39.099708 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htqll\" (UniqueName: \"kubernetes.io/projected/66e17b36-25ed-486f-b40e-ad1476b372c7-kube-api-access-htqll\") pod \"coredns-6f6b679f8f-h88pb\" (UID: \"66e17b36-25ed-486f-b40e-ad1476b372c7\") " pod="kube-system/coredns-6f6b679f8f-h88pb" Feb 14 01:47:39.099734 kubelet[4120]: I0214 01:47:39.099725 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1-config-volume\") pod \"coredns-6f6b679f8f-rnmrh\" (UID: \"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1\") " pod="kube-system/coredns-6f6b679f8f-rnmrh" Feb 14 
01:47:39.099781 kubelet[4120]: I0214 01:47:39.099751 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4057f79-38ab-4790-ae6d-39417a81be01-calico-apiserver-certs\") pod \"calico-apiserver-5f5c9f9c9f-2f27r\" (UID: \"d4057f79-38ab-4790-ae6d-39417a81be01\") " pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-2f27r" Feb 14 01:47:39.099781 kubelet[4120]: I0214 01:47:39.099769 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mfjm\" (UniqueName: \"kubernetes.io/projected/d4057f79-38ab-4790-ae6d-39417a81be01-kube-api-access-2mfjm\") pod \"calico-apiserver-5f5c9f9c9f-2f27r\" (UID: \"d4057f79-38ab-4790-ae6d-39417a81be01\") " pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-2f27r" Feb 14 01:47:39.099837 kubelet[4120]: I0214 01:47:39.099822 4120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66e17b36-25ed-486f-b40e-ad1476b372c7-config-volume\") pod \"coredns-6f6b679f8f-h88pb\" (UID: \"66e17b36-25ed-486f-b40e-ad1476b372c7\") " pod="kube-system/coredns-6f6b679f8f-h88pb" Feb 14 01:47:39.151904 containerd[2699]: time="2025-02-14T01:47:39.151851945Z" level=error msg="Failed to destroy network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.152223 containerd[2699]: time="2025-02-14T01:47:39.152198863Z" level=error msg="encountered an error cleaning up failed sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.152267 containerd[2699]: time="2025-02-14T01:47:39.152245583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bf5gn,Uid:368a98f4-3c61-48c7-a03d-61e5961b1cc9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.152501 kubelet[4120]: E0214 01:47:39.152454 4120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.152557 kubelet[4120]: E0214 01:47:39.152541 4120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bf5gn" Feb 14 01:47:39.152583 kubelet[4120]: E0214 01:47:39.152561 4120 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bf5gn" Feb 14 01:47:39.152625 kubelet[4120]: E0214 01:47:39.152602 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bf5gn_calico-system(368a98f4-3c61-48c7-a03d-61e5961b1cc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bf5gn_calico-system(368a98f4-3c61-48c7-a03d-61e5961b1cc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bf5gn" podUID="368a98f4-3c61-48c7-a03d-61e5961b1cc9" Feb 14 01:47:39.291267 containerd[2699]: time="2025-02-14T01:47:39.291188222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-cnrft,Uid:3ccf9c15-46bf-4f15-bf25-3067d3a65e25,Namespace:calico-apiserver,Attempt:0,}" Feb 14 01:47:39.294693 containerd[2699]: time="2025-02-14T01:47:39.294663362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnmrh,Uid:3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1,Namespace:kube-system,Attempt:0,}" Feb 14 01:47:39.298155 containerd[2699]: time="2025-02-14T01:47:39.298127623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h88pb,Uid:66e17b36-25ed-486f-b40e-ad1476b372c7,Namespace:kube-system,Attempt:0,}" Feb 14 01:47:39.301677 containerd[2699]: time="2025-02-14T01:47:39.301648844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c94569b7-pl64x,Uid:7a7c0e06-d39c-44c9-a08b-42e6e8f22180,Namespace:calico-system,Attempt:0,}" Feb 14 01:47:39.307250 containerd[2699]: time="2025-02-14T01:47:39.307218494Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-2f27r,Uid:d4057f79-38ab-4790-ae6d-39417a81be01,Namespace:calico-apiserver,Attempt:0,}" Feb 14 01:47:39.336407 containerd[2699]: time="2025-02-14T01:47:39.336351334Z" level=error msg="Failed to destroy network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.336713 containerd[2699]: time="2025-02-14T01:47:39.336687932Z" level=error msg="encountered an error cleaning up failed sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.336782 containerd[2699]: time="2025-02-14T01:47:39.336744252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-cnrft,Uid:3ccf9c15-46bf-4f15-bf25-3067d3a65e25,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.336978 kubelet[4120]: E0214 01:47:39.336941 4120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.337024 
kubelet[4120]: E0214 01:47:39.337007 4120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-cnrft" Feb 14 01:47:39.337050 kubelet[4120]: E0214 01:47:39.337027 4120 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-cnrft" Feb 14 01:47:39.337091 kubelet[4120]: E0214 01:47:39.337068 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5c9f9c9f-cnrft_calico-apiserver(3ccf9c15-46bf-4f15-bf25-3067d3a65e25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5c9f9c9f-cnrft_calico-apiserver(3ccf9c15-46bf-4f15-bf25-3067d3a65e25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-cnrft" podUID="3ccf9c15-46bf-4f15-bf25-3067d3a65e25" Feb 14 01:47:39.337131 containerd[2699]: time="2025-02-14T01:47:39.337097410Z" level=error msg="Failed to destroy network for sandbox 
\"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.337482 containerd[2699]: time="2025-02-14T01:47:39.337458328Z" level=error msg="encountered an error cleaning up failed sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.337518 containerd[2699]: time="2025-02-14T01:47:39.337502328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnmrh,Uid:3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.337659 kubelet[4120]: E0214 01:47:39.337639 4120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.337686 kubelet[4120]: E0214 01:47:39.337674 4120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnmrh" Feb 14 01:47:39.337707 kubelet[4120]: E0214 01:47:39.337690 4120 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnmrh" Feb 14 01:47:39.337743 kubelet[4120]: E0214 01:47:39.337722 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rnmrh_kube-system(3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rnmrh_kube-system(3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rnmrh" podUID="3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1" Feb 14 01:47:39.340277 containerd[2699]: time="2025-02-14T01:47:39.340252393Z" level=error msg="Failed to destroy network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.340546 containerd[2699]: time="2025-02-14T01:47:39.340526151Z" level=error msg="encountered an error cleaning up failed 
sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.340576 containerd[2699]: time="2025-02-14T01:47:39.340560671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h88pb,Uid:66e17b36-25ed-486f-b40e-ad1476b372c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.340697 kubelet[4120]: E0214 01:47:39.340676 4120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.340722 kubelet[4120]: E0214 01:47:39.340709 4120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-h88pb" Feb 14 01:47:39.340748 kubelet[4120]: E0214 01:47:39.340726 4120 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-h88pb" Feb 14 01:47:39.340774 kubelet[4120]: E0214 01:47:39.340758 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-h88pb_kube-system(66e17b36-25ed-486f-b40e-ad1476b372c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-h88pb_kube-system(66e17b36-25ed-486f-b40e-ad1476b372c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-h88pb" podUID="66e17b36-25ed-486f-b40e-ad1476b372c7" Feb 14 01:47:39.346065 containerd[2699]: time="2025-02-14T01:47:39.346033561Z" level=error msg="Failed to destroy network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.346346 containerd[2699]: time="2025-02-14T01:47:39.346323999Z" level=error msg="encountered an error cleaning up failed sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.346382 containerd[2699]: time="2025-02-14T01:47:39.346364679Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c94569b7-pl64x,Uid:7a7c0e06-d39c-44c9-a08b-42e6e8f22180,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.346533 kubelet[4120]: E0214 01:47:39.346508 4120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.346560 kubelet[4120]: E0214 01:47:39.346549 4120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c94569b7-pl64x" Feb 14 01:47:39.346583 kubelet[4120]: E0214 01:47:39.346565 4120 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67c94569b7-pl64x" Feb 14 01:47:39.346621 kubelet[4120]: E0214 
01:47:39.346602 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67c94569b7-pl64x_calico-system(7a7c0e06-d39c-44c9-a08b-42e6e8f22180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67c94569b7-pl64x_calico-system(7a7c0e06-d39c-44c9-a08b-42e6e8f22180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67c94569b7-pl64x" podUID="7a7c0e06-d39c-44c9-a08b-42e6e8f22180" Feb 14 01:47:39.361529 containerd[2699]: time="2025-02-14T01:47:39.361499476Z" level=error msg="Failed to destroy network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.361845 containerd[2699]: time="2025-02-14T01:47:39.361822474Z" level=error msg="encountered an error cleaning up failed sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.361887 containerd[2699]: time="2025-02-14T01:47:39.361870154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-2f27r,Uid:d4057f79-38ab-4790-ae6d-39417a81be01,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.362033 kubelet[4120]: E0214 01:47:39.362006 4120 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:39.362076 kubelet[4120]: E0214 01:47:39.362056 4120 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-2f27r" Feb 14 01:47:39.362105 kubelet[4120]: E0214 01:47:39.362084 4120 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-2f27r" Feb 14 01:47:39.362149 kubelet[4120]: E0214 01:47:39.362131 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5c9f9c9f-2f27r_calico-apiserver(d4057f79-38ab-4790-ae6d-39417a81be01)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-5f5c9f9c9f-2f27r_calico-apiserver(d4057f79-38ab-4790-ae6d-39417a81be01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-2f27r" podUID="d4057f79-38ab-4790-ae6d-39417a81be01" Feb 14 01:47:39.546185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32e296acd9f53ed94eb5ab089cd2516b211071751e780474ea6f9483a74d9bce-rootfs.mount: Deactivated successfully. Feb 14 01:47:40.037268 kubelet[4120]: I0214 01:47:40.037242 4120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:47:40.037849 containerd[2699]: time="2025-02-14T01:47:40.037743062Z" level=info msg="StopPodSandbox for \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\"" Feb 14 01:47:40.038067 containerd[2699]: time="2025-02-14T01:47:40.037887781Z" level=info msg="Ensure that sandbox 325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e in task-service has been cleanup successfully" Feb 14 01:47:40.038096 kubelet[4120]: I0214 01:47:40.037938 4120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:47:40.038379 containerd[2699]: time="2025-02-14T01:47:40.038342458Z" level=info msg="StopPodSandbox for \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\"" Feb 14 01:47:40.038498 containerd[2699]: time="2025-02-14T01:47:40.038474498Z" level=info msg="Ensure that sandbox 172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe in task-service has been cleanup successfully" Feb 14 
01:47:40.038668 kubelet[4120]: I0214 01:47:40.038652 4120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:47:40.039053 containerd[2699]: time="2025-02-14T01:47:40.039024415Z" level=info msg="StopPodSandbox for \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\"" Feb 14 01:47:40.039179 containerd[2699]: time="2025-02-14T01:47:40.039162574Z" level=info msg="Ensure that sandbox bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae in task-service has been cleanup successfully" Feb 14 01:47:40.040729 kubelet[4120]: I0214 01:47:40.040713 4120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:47:40.040825 containerd[2699]: time="2025-02-14T01:47:40.040805286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 14 01:47:40.041085 containerd[2699]: time="2025-02-14T01:47:40.041064804Z" level=info msg="StopPodSandbox for \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\"" Feb 14 01:47:40.041211 containerd[2699]: time="2025-02-14T01:47:40.041196844Z" level=info msg="Ensure that sandbox 51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402 in task-service has been cleanup successfully" Feb 14 01:47:40.041631 kubelet[4120]: I0214 01:47:40.041503 4120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:47:40.041884 containerd[2699]: time="2025-02-14T01:47:40.041864000Z" level=info msg="StopPodSandbox for \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\"" Feb 14 01:47:40.042005 containerd[2699]: time="2025-02-14T01:47:40.041991720Z" level=info msg="Ensure that sandbox 6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3 in task-service has been 
cleanup successfully" Feb 14 01:47:40.042828 kubelet[4120]: I0214 01:47:40.042809 4120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:47:40.043781 containerd[2699]: time="2025-02-14T01:47:40.043292233Z" level=info msg="StopPodSandbox for \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\"" Feb 14 01:47:40.044833 containerd[2699]: time="2025-02-14T01:47:40.044122149Z" level=info msg="Ensure that sandbox 628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d in task-service has been cleanup successfully" Feb 14 01:47:40.063783 containerd[2699]: time="2025-02-14T01:47:40.063728168Z" level=error msg="StopPodSandbox for \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\" failed" error="failed to destroy network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:40.063975 containerd[2699]: time="2025-02-14T01:47:40.063931567Z" level=error msg="StopPodSandbox for \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\" failed" error="failed to destroy network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:40.064071 kubelet[4120]: E0214 01:47:40.064039 4120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:47:40.064133 kubelet[4120]: E0214 01:47:40.064092 4120 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe"} Feb 14 01:47:40.064161 kubelet[4120]: E0214 01:47:40.064039 4120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:47:40.064161 kubelet[4120]: E0214 01:47:40.064149 4120 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66e17b36-25ed-486f-b40e-ad1476b372c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:47:40.064242 kubelet[4120]: E0214 01:47:40.064164 4120 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e"} Feb 14 01:47:40.064242 kubelet[4120]: E0214 01:47:40.064169 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66e17b36-25ed-486f-b40e-ad1476b372c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-h88pb" podUID="66e17b36-25ed-486f-b40e-ad1476b372c7" Feb 14 01:47:40.064242 kubelet[4120]: E0214 01:47:40.064193 4120 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a7c0e06-d39c-44c9-a08b-42e6e8f22180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:47:40.064242 kubelet[4120]: E0214 01:47:40.064217 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a7c0e06-d39c-44c9-a08b-42e6e8f22180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67c94569b7-pl64x" podUID="7a7c0e06-d39c-44c9-a08b-42e6e8f22180" Feb 14 01:47:40.064642 containerd[2699]: time="2025-02-14T01:47:40.064604363Z" level=error msg="StopPodSandbox for \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\" failed" error="failed to destroy network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:40.064769 kubelet[4120]: E0214 01:47:40.064739 4120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:47:40.064793 kubelet[4120]: E0214 01:47:40.064777 4120 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae"} Feb 14 01:47:40.064819 kubelet[4120]: E0214 01:47:40.064802 4120 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:47:40.064855 kubelet[4120]: E0214 01:47:40.064822 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rnmrh" 
podUID="3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1" Feb 14 01:47:40.066029 containerd[2699]: time="2025-02-14T01:47:40.065992476Z" level=error msg="StopPodSandbox for \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\" failed" error="failed to destroy network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:40.066144 kubelet[4120]: E0214 01:47:40.066124 4120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:47:40.066170 kubelet[4120]: E0214 01:47:40.066148 4120 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3"} Feb 14 01:47:40.066192 kubelet[4120]: E0214 01:47:40.066166 4120 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"368a98f4-3c61-48c7-a03d-61e5961b1cc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:47:40.066192 kubelet[4120]: E0214 01:47:40.066182 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"368a98f4-3c61-48c7-a03d-61e5961b1cc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bf5gn" podUID="368a98f4-3c61-48c7-a03d-61e5961b1cc9" Feb 14 01:47:40.066592 containerd[2699]: time="2025-02-14T01:47:40.066564553Z" level=error msg="StopPodSandbox for \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\" failed" error="failed to destroy network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:40.066702 kubelet[4120]: E0214 01:47:40.066676 4120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:47:40.066727 kubelet[4120]: E0214 01:47:40.066711 4120 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402"} Feb 14 01:47:40.066751 kubelet[4120]: E0214 01:47:40.066736 4120 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ccf9c15-46bf-4f15-bf25-3067d3a65e25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:47:40.066786 kubelet[4120]: E0214 01:47:40.066758 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ccf9c15-46bf-4f15-bf25-3067d3a65e25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-cnrft" podUID="3ccf9c15-46bf-4f15-bf25-3067d3a65e25" Feb 14 01:47:40.068069 containerd[2699]: time="2025-02-14T01:47:40.068042546Z" level=error msg="StopPodSandbox for \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\" failed" error="failed to destroy network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 01:47:40.068169 kubelet[4120]: E0214 01:47:40.068154 4120 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:47:40.068195 
kubelet[4120]: E0214 01:47:40.068173 4120 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d"} Feb 14 01:47:40.068217 kubelet[4120]: E0214 01:47:40.068192 4120 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4057f79-38ab-4790-ae6d-39417a81be01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 01:47:40.068217 kubelet[4120]: E0214 01:47:40.068206 4120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4057f79-38ab-4790-ae6d-39417a81be01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-2f27r" podUID="d4057f79-38ab-4790-ae6d-39417a81be01" Feb 14 01:47:42.558564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941678877.mount: Deactivated successfully. 
Feb 14 01:47:42.584526 containerd[2699]: time="2025-02-14T01:47:42.584481577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:42.584792 containerd[2699]: time="2025-02-14T01:47:42.584562337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 14 01:47:42.585209 containerd[2699]: time="2025-02-14T01:47:42.585189294Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:42.586746 containerd[2699]: time="2025-02-14T01:47:42.586721887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:42.587347 containerd[2699]: time="2025-02-14T01:47:42.587325844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 2.546489718s" Feb 14 01:47:42.587369 containerd[2699]: time="2025-02-14T01:47:42.587354564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 14 01:47:42.592656 containerd[2699]: time="2025-02-14T01:47:42.592631500Z" level=info msg="CreateContainer within sandbox \"ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 14 01:47:42.599033 containerd[2699]: time="2025-02-14T01:47:42.599003472Z" level=info 
msg="CreateContainer within sandbox \"ac0dd9b94046444ceb5a174e6e27805cdbe0586b63dff2ab19a3f852ddfb6e36\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"803223d2827fd6a7cfed6bc1bdd335dce0e85931269b6ce1fb7a1c0c344287b3\"" Feb 14 01:47:42.599392 containerd[2699]: time="2025-02-14T01:47:42.599364790Z" level=info msg="StartContainer for \"803223d2827fd6a7cfed6bc1bdd335dce0e85931269b6ce1fb7a1c0c344287b3\"" Feb 14 01:47:42.626858 systemd[1]: Started cri-containerd-803223d2827fd6a7cfed6bc1bdd335dce0e85931269b6ce1fb7a1c0c344287b3.scope - libcontainer container 803223d2827fd6a7cfed6bc1bdd335dce0e85931269b6ce1fb7a1c0c344287b3. Feb 14 01:47:42.646922 containerd[2699]: time="2025-02-14T01:47:42.646890535Z" level=info msg="StartContainer for \"803223d2827fd6a7cfed6bc1bdd335dce0e85931269b6ce1fb7a1c0c344287b3\" returns successfully" Feb 14 01:47:42.755152 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 14 01:47:42.755196 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 14 01:47:43.060200 kubelet[4120]: I0214 01:47:43.060150 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jcv5f" podStartSLOduration=0.847680313 podStartE2EDuration="8.060135885s" podCreationTimestamp="2025-02-14 01:47:35 +0000 UTC" firstStartedPulling="2025-02-14 01:47:35.37545687 +0000 UTC m=+14.444228818" lastFinishedPulling="2025-02-14 01:47:42.587912442 +0000 UTC m=+21.656684390" observedRunningTime="2025-02-14 01:47:43.059495928 +0000 UTC m=+22.128267916" watchObservedRunningTime="2025-02-14 01:47:43.060135885 +0000 UTC m=+22.128907873"
Feb 14 01:47:44.049384 kubelet[4120]: I0214 01:47:44.049347 4120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 01:47:48.959471 kubelet[4120]: I0214 01:47:48.959382 4120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 01:47:49.139807 kernel: bpftool[6258]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 14 01:47:49.290168 systemd-networkd[2600]: vxlan.calico: Link UP
Feb 14 01:47:49.290179 systemd-networkd[2600]: vxlan.calico: Gained carrier
Feb 14 01:47:50.494253 systemd-networkd[2600]: vxlan.calico: Gained IPv6LL
Feb 14 01:47:51.002837 containerd[2699]: time="2025-02-14T01:47:51.002779527Z" level=info msg="StopPodSandbox for \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\""
Feb 14 01:47:51.003115 containerd[2699]: time="2025-02-14T01:47:51.002889927Z" level=info msg="StopPodSandbox for \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\""
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.042 [INFO][6606] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.042 [INFO][6606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" iface="eth0" netns="/var/run/netns/cni-2c2b1901-b29d-d835-5396-1a87653f45dd"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.042 [INFO][6606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" iface="eth0" netns="/var/run/netns/cni-2c2b1901-b29d-d835-5396-1a87653f45dd"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.043 [INFO][6606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" iface="eth0" netns="/var/run/netns/cni-2c2b1901-b29d-d835-5396-1a87653f45dd"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.043 [INFO][6606] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.043 [INFO][6606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.073 [INFO][6636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.073 [INFO][6636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.073 [INFO][6636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.081 [WARNING][6636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.081 [INFO][6636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.082 [INFO][6636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 14 01:47:51.085280 containerd[2699]: 2025-02-14 01:47:51.084 [INFO][6606] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d"
Feb 14 01:47:51.085610 containerd[2699]: time="2025-02-14T01:47:51.085417478Z" level=info msg="TearDown network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\" successfully"
Feb 14 01:47:51.085610 containerd[2699]: time="2025-02-14T01:47:51.085442198Z" level=info msg="StopPodSandbox for \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\" returns successfully"
Feb 14 01:47:51.085958 containerd[2699]: time="2025-02-14T01:47:51.085935517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-2f27r,Uid:d4057f79-38ab-4790-ae6d-39417a81be01,Namespace:calico-apiserver,Attempt:1,}"
Feb 14 01:47:51.087216 systemd[1]: run-netns-cni\x2d2c2b1901\x2db29d\x2dd835\x2d5396\x2d1a87653f45dd.mount: Deactivated successfully.
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.042 [INFO][6605] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.042 [INFO][6605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" iface="eth0" netns="/var/run/netns/cni-35c098ee-9bc8-4648-098a-46a2b5f833be"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.043 [INFO][6605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" iface="eth0" netns="/var/run/netns/cni-35c098ee-9bc8-4648-098a-46a2b5f833be"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.043 [INFO][6605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" iface="eth0" netns="/var/run/netns/cni-35c098ee-9bc8-4648-098a-46a2b5f833be"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.043 [INFO][6605] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.043 [INFO][6605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.073 [INFO][6637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.073 [INFO][6637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.082 [INFO][6637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.102 [WARNING][6637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.102 [INFO][6637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.103 [INFO][6637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 14 01:47:51.105796 containerd[2699]: 2025-02-14 01:47:51.104 [INFO][6605] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3"
Feb 14 01:47:51.106142 containerd[2699]: time="2025-02-14T01:47:51.105930946Z" level=info msg="TearDown network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\" successfully"
Feb 14 01:47:51.106142 containerd[2699]: time="2025-02-14T01:47:51.105956466Z" level=info msg="StopPodSandbox for \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\" returns successfully"
Feb 14 01:47:51.106386 containerd[2699]: time="2025-02-14T01:47:51.106362785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bf5gn,Uid:368a98f4-3c61-48c7-a03d-61e5961b1cc9,Namespace:calico-system,Attempt:1,}"
Feb 14 01:47:51.107543 systemd[1]: run-netns-cni\x2d35c098ee\x2d9bc8\x2d4648\x2d098a\x2d46a2b5f833be.mount: Deactivated successfully.
Feb 14 01:47:51.184505 systemd-networkd[2600]: cali43929a304c7: Link UP
Feb 14 01:47:51.184689 systemd-networkd[2600]: cali43929a304c7: Gained carrier
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.118 [INFO][6674] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0 calico-apiserver-5f5c9f9c9f- calico-apiserver d4057f79-38ab-4790-ae6d-39417a81be01 748 0 2025-02-14 01:47:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f5c9f9c9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-385c1ddb28 calico-apiserver-5f5c9f9c9f-2f27r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43929a304c7 [] []}} ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.118 [INFO][6674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.141 [INFO][6723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" HandleID="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.159 [INFO][6723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" HandleID="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000443b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-385c1ddb28", "pod":"calico-apiserver-5f5c9f9c9f-2f27r", "timestamp":"2025-02-14 01:47:51.141478776 +0000 UTC"}, Hostname:"ci-4081.3.1-a-385c1ddb28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.159 [INFO][6723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.159 [INFO][6723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.159 [INFO][6723] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-385c1ddb28'
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.161 [INFO][6723] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.168 [INFO][6723] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.171 [INFO][6723] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.173 [INFO][6723] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.174 [INFO][6723] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.174 [INFO][6723] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.175 [INFO][6723] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.178 [INFO][6723] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.181 [INFO][6723] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.193/26] block=192.168.11.192/26 handle="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.181 [INFO][6723] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.193/26] handle="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.181 [INFO][6723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 14 01:47:51.191465 containerd[2699]: 2025-02-14 01:47:51.181 [INFO][6723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.193/26] IPv6=[] ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" HandleID="k8s-pod-network.69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.191925 containerd[2699]: 2025-02-14 01:47:51.183 [INFO][6674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4057f79-38ab-4790-ae6d-39417a81be01", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"", Pod:"calico-apiserver-5f5c9f9c9f-2f27r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43929a304c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 14 01:47:51.191925 containerd[2699]: 2025-02-14 01:47:51.183 [INFO][6674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.193/32] ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.191925 containerd[2699]: 2025-02-14 01:47:51.183 [INFO][6674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43929a304c7 ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.191925 containerd[2699]: 2025-02-14 01:47:51.184 [INFO][6674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.191925 containerd[2699]: 2025-02-14 01:47:51.184 [INFO][6674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4057f79-38ab-4790-ae6d-39417a81be01", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98", Pod:"calico-apiserver-5f5c9f9c9f-2f27r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43929a304c7", MAC:"d6:b8:b0:ab:02:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 14 01:47:51.191925 containerd[2699]: 2025-02-14 01:47:51.190 [INFO][6674] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-2f27r" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0"
Feb 14 01:47:51.205053 containerd[2699]: time="2025-02-14T01:47:51.204737937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 01:47:51.205083 containerd[2699]: time="2025-02-14T01:47:51.205051936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 01:47:51.205083 containerd[2699]: time="2025-02-14T01:47:51.205065216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:51.205155 containerd[2699]: time="2025-02-14T01:47:51.205141015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:51.228864 systemd[1]: Started cri-containerd-69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98.scope - libcontainer container 69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98.
Feb 14 01:47:51.251704 containerd[2699]: time="2025-02-14T01:47:51.251671538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-2f27r,Uid:d4057f79-38ab-4790-ae6d-39417a81be01,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98\""
Feb 14 01:47:51.252793 containerd[2699]: time="2025-02-14T01:47:51.252768255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 14 01:47:51.285662 systemd-networkd[2600]: calid9028f94f77: Link UP
Feb 14 01:47:51.285851 systemd-networkd[2600]: calid9028f94f77: Gained carrier
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.138 [INFO][6700] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0 csi-node-driver- calico-system 368a98f4-3c61-48c7-a03d-61e5961b1cc9 749 0 2025-02-14 01:47:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-a-385c1ddb28 csi-node-driver-bf5gn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid9028f94f77 [] []}} ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.138 [INFO][6700] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.160 [INFO][6747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" HandleID="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.170 [INFO][6747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" HandleID="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000374280), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-385c1ddb28", "pod":"csi-node-driver-bf5gn", "timestamp":"2025-02-14 01:47:51.160712848 +0000 UTC"}, Hostname:"ci-4081.3.1-a-385c1ddb28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.170 [INFO][6747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.181 [INFO][6747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.181 [INFO][6747] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-385c1ddb28'
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.262 [INFO][6747] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.266 [INFO][6747] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.272 [INFO][6747] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.274 [INFO][6747] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.275 [INFO][6747] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.275 [INFO][6747] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.276 [INFO][6747] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.279 [INFO][6747] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.282 [INFO][6747] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.194/26] block=192.168.11.192/26 handle="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.282 [INFO][6747] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.194/26] handle="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" host="ci-4081.3.1-a-385c1ddb28"
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.282 [INFO][6747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 14 01:47:51.293660 containerd[2699]: 2025-02-14 01:47:51.282 [INFO][6747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.194/26] IPv6=[] ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" HandleID="k8s-pod-network.f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.294108 containerd[2699]: 2025-02-14 01:47:51.284 [INFO][6700] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"368a98f4-3c61-48c7-a03d-61e5961b1cc9", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"", Pod:"csi-node-driver-bf5gn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9028f94f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 14 01:47:51.294108 containerd[2699]: 2025-02-14 01:47:51.284 [INFO][6700] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.194/32] ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.294108 containerd[2699]: 2025-02-14 01:47:51.284 [INFO][6700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9028f94f77 ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.294108 containerd[2699]: 2025-02-14 01:47:51.285 [INFO][6700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.294108 containerd[2699]: 2025-02-14 01:47:51.286 [INFO][6700] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"368a98f4-3c61-48c7-a03d-61e5961b1cc9", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84", Pod:"csi-node-driver-bf5gn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9028f94f77", MAC:"36:ca:f9:11:84:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 14 01:47:51.294108 containerd[2699]: 2025-02-14 01:47:51.292 [INFO][6700] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84" Namespace="calico-system" Pod="csi-node-driver-bf5gn" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0"
Feb 14 01:47:51.306947 containerd[2699]: time="2025-02-14T01:47:51.306592799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 01:47:51.306981 containerd[2699]: time="2025-02-14T01:47:51.306942278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 01:47:51.306981 containerd[2699]: time="2025-02-14T01:47:51.306956118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:51.307058 containerd[2699]: time="2025-02-14T01:47:51.307040558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 01:47:51.330868 systemd[1]: Started cri-containerd-f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84.scope - libcontainer container f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84.
Feb 14 01:47:51.346542 containerd[2699]: time="2025-02-14T01:47:51.346514178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bf5gn,Uid:368a98f4-3c61-48c7-a03d-61e5961b1cc9,Namespace:calico-system,Attempt:1,} returns sandbox id \"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84\"" Feb 14 01:47:52.002504 containerd[2699]: time="2025-02-14T01:47:52.002458201Z" level=info msg="StopPodSandbox for \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\"" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.038 [INFO][6900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.039 [INFO][6900] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" iface="eth0" netns="/var/run/netns/cni-3bbb1e24-d29a-23c4-478d-6bed829801a4" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.039 [INFO][6900] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" iface="eth0" netns="/var/run/netns/cni-3bbb1e24-d29a-23c4-478d-6bed829801a4" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.039 [INFO][6900] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" iface="eth0" netns="/var/run/netns/cni-3bbb1e24-d29a-23c4-478d-6bed829801a4" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.039 [INFO][6900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.039 [INFO][6900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.056 [INFO][6920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.056 [INFO][6920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.056 [INFO][6920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.064 [WARNING][6920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.064 [INFO][6920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.065 [INFO][6920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:52.067420 containerd[2699]: 2025-02-14 01:47:52.066 [INFO][6900] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:47:52.067973 containerd[2699]: time="2025-02-14T01:47:52.067548207Z" level=info msg="TearDown network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\" successfully" Feb 14 01:47:52.067973 containerd[2699]: time="2025-02-14T01:47:52.067573127Z" level=info msg="StopPodSandbox for \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\" returns successfully" Feb 14 01:47:52.068017 containerd[2699]: time="2025-02-14T01:47:52.067961446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-cnrft,Uid:3ccf9c15-46bf-4f15-bf25-3067d3a65e25,Namespace:calico-apiserver,Attempt:1,}" Feb 14 01:47:52.089960 systemd[1]: run-netns-cni\x2d3bbb1e24\x2dd29a\x2d23c4\x2d478d\x2d6bed829801a4.mount: Deactivated successfully. 
Feb 14 01:47:52.151912 systemd-networkd[2600]: calid233b15ebd1: Link UP Feb 14 01:47:52.152466 systemd-networkd[2600]: calid233b15ebd1: Gained carrier Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.100 [INFO][6938] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0 calico-apiserver-5f5c9f9c9f- calico-apiserver 3ccf9c15-46bf-4f15-bf25-3067d3a65e25 762 0 2025-02-14 01:47:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f5c9f9c9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-385c1ddb28 calico-apiserver-5f5c9f9c9f-cnrft eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid233b15ebd1 [] []}} ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.100 [INFO][6938] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.123 [INFO][6965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" HandleID="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 
01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.132 [INFO][6965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" HandleID="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e6660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-385c1ddb28", "pod":"calico-apiserver-5f5c9f9c9f-cnrft", "timestamp":"2025-02-14 01:47:52.123296915 +0000 UTC"}, Hostname:"ci-4081.3.1-a-385c1ddb28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.132 [INFO][6965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.132 [INFO][6965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.132 [INFO][6965] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-385c1ddb28' Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.133 [INFO][6965] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.136 [INFO][6965] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.139 [INFO][6965] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.140 [INFO][6965] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.142 [INFO][6965] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.142 [INFO][6965] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.143 [INFO][6965] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239 Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.145 [INFO][6965] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.149 [INFO][6965] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.11.195/26] block=192.168.11.192/26 handle="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.149 [INFO][6965] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.195/26] handle="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.149 [INFO][6965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:52.159209 containerd[2699]: 2025-02-14 01:47:52.149 [INFO][6965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.195/26] IPv6=[] ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" HandleID="k8s-pod-network.de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.159813 containerd[2699]: 2025-02-14 01:47:52.150 [INFO][6938] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ccf9c15-46bf-4f15-bf25-3067d3a65e25", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"", Pod:"calico-apiserver-5f5c9f9c9f-cnrft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid233b15ebd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:52.159813 containerd[2699]: 2025-02-14 01:47:52.150 [INFO][6938] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.195/32] ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.159813 containerd[2699]: 2025-02-14 01:47:52.150 [INFO][6938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid233b15ebd1 ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.159813 containerd[2699]: 2025-02-14 01:47:52.152 [INFO][6938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" 
WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.159813 containerd[2699]: 2025-02-14 01:47:52.152 [INFO][6938] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ccf9c15-46bf-4f15-bf25-3067d3a65e25", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239", Pod:"calico-apiserver-5f5c9f9c9f-cnrft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid233b15ebd1", MAC:"52:18:e4:10:7b:3c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:52.159813 containerd[2699]: 2025-02-14 01:47:52.157 [INFO][6938] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239" Namespace="calico-apiserver" Pod="calico-apiserver-5f5c9f9c9f-cnrft" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:47:52.172959 containerd[2699]: time="2025-02-14T01:47:52.172601038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:52.172993 containerd[2699]: time="2025-02-14T01:47:52.172959637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:52.172993 containerd[2699]: time="2025-02-14T01:47:52.172972797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:52.173079 containerd[2699]: time="2025-02-14T01:47:52.173054957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:52.202882 systemd[1]: Started cri-containerd-de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239.scope - libcontainer container de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239. 
Feb 14 01:47:52.226566 containerd[2699]: time="2025-02-14T01:47:52.226529430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5c9f9c9f-cnrft,Uid:3ccf9c15-46bf-4f15-bf25-3067d3a65e25,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239\"" Feb 14 01:47:52.247802 containerd[2699]: time="2025-02-14T01:47:52.247767380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:52.247866 containerd[2699]: time="2025-02-14T01:47:52.247838580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 14 01:47:52.248450 containerd[2699]: time="2025-02-14T01:47:52.248430098Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:52.250264 containerd[2699]: time="2025-02-14T01:47:52.250239974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:52.250961 containerd[2699]: time="2025-02-14T01:47:52.250941492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 998.142037ms" Feb 14 01:47:52.250987 containerd[2699]: time="2025-02-14T01:47:52.250963292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 14 01:47:52.251626 containerd[2699]: time="2025-02-14T01:47:52.251604371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 14 01:47:52.252423 containerd[2699]: time="2025-02-14T01:47:52.252399209Z" level=info msg="CreateContainer within sandbox \"69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 01:47:52.257128 containerd[2699]: time="2025-02-14T01:47:52.257069558Z" level=info msg="CreateContainer within sandbox \"69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"686c5131f536a647f690c3f865db27416b24511d2689f9558c457f38cf69728b\"" Feb 14 01:47:52.257412 containerd[2699]: time="2025-02-14T01:47:52.257388317Z" level=info msg="StartContainer for \"686c5131f536a647f690c3f865db27416b24511d2689f9558c457f38cf69728b\"" Feb 14 01:47:52.283863 systemd[1]: Started cri-containerd-686c5131f536a647f690c3f865db27416b24511d2689f9558c457f38cf69728b.scope - libcontainer container 686c5131f536a647f690c3f865db27416b24511d2689f9558c457f38cf69728b. 
Feb 14 01:47:52.307683 containerd[2699]: time="2025-02-14T01:47:52.307652238Z" level=info msg="StartContainer for \"686c5131f536a647f690c3f865db27416b24511d2689f9558c457f38cf69728b\" returns successfully" Feb 14 01:47:52.669865 systemd-networkd[2600]: cali43929a304c7: Gained IPv6LL Feb 14 01:47:52.759049 containerd[2699]: time="2025-02-14T01:47:52.759012089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:52.759110 containerd[2699]: time="2025-02-14T01:47:52.759078849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 14 01:47:52.759775 containerd[2699]: time="2025-02-14T01:47:52.759753647Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:52.761561 containerd[2699]: time="2025-02-14T01:47:52.761532683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:52.762257 containerd[2699]: time="2025-02-14T01:47:52.762225801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 510.59043ms" Feb 14 01:47:52.762286 containerd[2699]: time="2025-02-14T01:47:52.762261081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 14 01:47:52.763084 containerd[2699]: time="2025-02-14T01:47:52.763062479Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 14 01:47:52.763985 containerd[2699]: time="2025-02-14T01:47:52.763959757Z" level=info msg="CreateContainer within sandbox \"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 14 01:47:52.771734 containerd[2699]: time="2025-02-14T01:47:52.771701939Z" level=info msg="CreateContainer within sandbox \"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4d0416530dd0bfac2bf5352b0065e2650b31e353cf9035b0de42eef89cac3c7e\"" Feb 14 01:47:52.772191 containerd[2699]: time="2025-02-14T01:47:52.772167937Z" level=info msg="StartContainer for \"4d0416530dd0bfac2bf5352b0065e2650b31e353cf9035b0de42eef89cac3c7e\"" Feb 14 01:47:52.800854 systemd[1]: Started cri-containerd-4d0416530dd0bfac2bf5352b0065e2650b31e353cf9035b0de42eef89cac3c7e.scope - libcontainer container 4d0416530dd0bfac2bf5352b0065e2650b31e353cf9035b0de42eef89cac3c7e. 
Feb 14 01:47:52.819484 containerd[2699]: time="2025-02-14T01:47:52.819453585Z" level=info msg="StartContainer for \"4d0416530dd0bfac2bf5352b0065e2650b31e353cf9035b0de42eef89cac3c7e\" returns successfully" Feb 14 01:47:52.868954 containerd[2699]: time="2025-02-14T01:47:52.868926268Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:52.869019 containerd[2699]: time="2025-02-14T01:47:52.868987148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 14 01:47:52.871551 containerd[2699]: time="2025-02-14T01:47:52.871516942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 108.421663ms" Feb 14 01:47:52.871606 containerd[2699]: time="2025-02-14T01:47:52.871550942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 14 01:47:52.872348 containerd[2699]: time="2025-02-14T01:47:52.872321060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 14 01:47:52.873113 containerd[2699]: time="2025-02-14T01:47:52.873089098Z" level=info msg="CreateContainer within sandbox \"de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 01:47:52.886700 containerd[2699]: time="2025-02-14T01:47:52.886666466Z" level=info msg="CreateContainer within sandbox \"de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de848ac01ac6063e4e4b6d3910acb350dfb1d32dec3e804534d47dc062f8cfa0\"" Feb 14 01:47:52.887019 containerd[2699]: time="2025-02-14T01:47:52.886998545Z" level=info msg="StartContainer for \"de848ac01ac6063e4e4b6d3910acb350dfb1d32dec3e804534d47dc062f8cfa0\"" Feb 14 01:47:52.914927 systemd[1]: Started cri-containerd-de848ac01ac6063e4e4b6d3910acb350dfb1d32dec3e804534d47dc062f8cfa0.scope - libcontainer container de848ac01ac6063e4e4b6d3910acb350dfb1d32dec3e804534d47dc062f8cfa0. Feb 14 01:47:52.926827 systemd-networkd[2600]: calid9028f94f77: Gained IPv6LL Feb 14 01:47:52.939061 containerd[2699]: time="2025-02-14T01:47:52.939027742Z" level=info msg="StartContainer for \"de848ac01ac6063e4e4b6d3910acb350dfb1d32dec3e804534d47dc062f8cfa0\" returns successfully" Feb 14 01:47:53.003316 containerd[2699]: time="2025-02-14T01:47:53.003291030Z" level=info msg="StopPodSandbox for \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\"" Feb 14 01:47:53.003403 containerd[2699]: time="2025-02-14T01:47:53.003292230Z" level=info msg="StopPodSandbox for \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\"" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.040 [INFO][7223] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7223] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" iface="eth0" netns="/var/run/netns/cni-894490f9-df08-c0c5-f8bd-35bf3adde0a9" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7223] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" iface="eth0" netns="/var/run/netns/cni-894490f9-df08-c0c5-f8bd-35bf3adde0a9" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7223] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" iface="eth0" netns="/var/run/netns/cni-894490f9-df08-c0c5-f8bd-35bf3adde0a9" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7223] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.058 [INFO][7270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.058 [INFO][7270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.058 [INFO][7270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.066 [WARNING][7270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.066 [INFO][7270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.067 [INFO][7270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:53.069459 containerd[2699]: 2025-02-14 01:47:53.068 [INFO][7223] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:47:53.070016 containerd[2699]: time="2025-02-14T01:47:53.069628923Z" level=info msg="TearDown network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\" successfully" Feb 14 01:47:53.070016 containerd[2699]: time="2025-02-14T01:47:53.069662483Z" level=info msg="StopPodSandbox for \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\" returns successfully" Feb 14 01:47:53.070059 containerd[2699]: time="2025-02-14T01:47:53.070032242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h88pb,Uid:66e17b36-25ed-486f-b40e-ad1476b372c7,Namespace:kube-system,Attempt:1,}" Feb 14 01:47:53.072951 kubelet[4120]: I0214 01:47:53.072895 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-2f27r" podStartSLOduration=19.073936201 podStartE2EDuration="20.072878556s" podCreationTimestamp="2025-02-14 01:47:33 +0000 UTC" firstStartedPulling="2025-02-14 01:47:51.252567936 
+0000 UTC m=+30.321339884" lastFinishedPulling="2025-02-14 01:47:52.251510251 +0000 UTC m=+31.320282239" observedRunningTime="2025-02-14 01:47:53.072870476 +0000 UTC m=+32.141642464" watchObservedRunningTime="2025-02-14 01:47:53.072878556 +0000 UTC m=+32.141650504" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.040 [INFO][7222] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.040 [INFO][7222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" iface="eth0" netns="/var/run/netns/cni-d7b2721e-93a9-a165-2d64-fd5c6829e857" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.040 [INFO][7222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" iface="eth0" netns="/var/run/netns/cni-d7b2721e-93a9-a165-2d64-fd5c6829e857" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" iface="eth0" netns="/var/run/netns/cni-d7b2721e-93a9-a165-2d64-fd5c6829e857" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7222] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.041 [INFO][7222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.058 [INFO][7269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.058 [INFO][7269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.067 [INFO][7269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.074 [WARNING][7269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.074 [INFO][7269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.075 [INFO][7269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:53.078364 containerd[2699]: 2025-02-14 01:47:53.076 [INFO][7222] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:47:53.078634 containerd[2699]: time="2025-02-14T01:47:53.078566183Z" level=info msg="TearDown network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\" successfully" Feb 14 01:47:53.078634 containerd[2699]: time="2025-02-14T01:47:53.078590183Z" level=info msg="StopPodSandbox for \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\" returns successfully" Feb 14 01:47:53.079073 containerd[2699]: time="2025-02-14T01:47:53.079048182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c94569b7-pl64x,Uid:7a7c0e06-d39c-44c9-a08b-42e6e8f22180,Namespace:calico-system,Attempt:1,}" Feb 14 01:47:53.080154 kubelet[4120]: I0214 01:47:53.080110 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f5c9f9c9f-cnrft" podStartSLOduration=19.435349427 podStartE2EDuration="20.0800947s" podCreationTimestamp="2025-02-14 01:47:33 +0000 UTC" 
firstStartedPulling="2025-02-14 01:47:52.227367948 +0000 UTC m=+31.296139936" lastFinishedPulling="2025-02-14 01:47:52.872113221 +0000 UTC m=+31.940885209" observedRunningTime="2025-02-14 01:47:53.079735501 +0000 UTC m=+32.148507489" watchObservedRunningTime="2025-02-14 01:47:53.0800947 +0000 UTC m=+32.148866688" Feb 14 01:47:53.091398 systemd[1]: run-netns-cni\x2dd7b2721e\x2d93a9\x2da165\x2d2d64\x2dfd5c6829e857.mount: Deactivated successfully. Feb 14 01:47:53.091479 systemd[1]: run-netns-cni\x2d894490f9\x2ddf08\x2dc0c5\x2df8bd\x2d35bf3adde0a9.mount: Deactivated successfully. Feb 14 01:47:53.178925 systemd-networkd[2600]: cali3426814b1ef: Link UP Feb 14 01:47:53.179318 systemd-networkd[2600]: cali3426814b1ef: Gained carrier Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.109 [INFO][7336] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0 calico-kube-controllers-67c94569b7- calico-system 7a7c0e06-d39c-44c9-a08b-42e6e8f22180 781 0 2025-02-14 01:47:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67c94569b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-a-385c1ddb28 calico-kube-controllers-67c94569b7-pl64x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3426814b1ef [] []}} ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.109 [INFO][7336] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.133 [INFO][7390] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" HandleID="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.142 [INFO][7390] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" HandleID="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-385c1ddb28", "pod":"calico-kube-controllers-67c94569b7-pl64x", "timestamp":"2025-02-14 01:47:53.13379666 +0000 UTC"}, Hostname:"ci-4081.3.1-a-385c1ddb28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.142 [INFO][7390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.142 [INFO][7390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.142 [INFO][7390] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-385c1ddb28' Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.143 [INFO][7390] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.146 [INFO][7390] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.149 [INFO][7390] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.151 [INFO][7390] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.153 [INFO][7390] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.153 [INFO][7390] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.156 [INFO][7390] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32 Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.158 [INFO][7390] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.174 [INFO][7390] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.11.196/26] block=192.168.11.192/26 handle="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.175 [INFO][7390] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.196/26] handle="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.175 [INFO][7390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:53.187020 containerd[2699]: 2025-02-14 01:47:53.175 [INFO][7390] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.196/26] IPv6=[] ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" HandleID="k8s-pod-network.645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.187558 containerd[2699]: 2025-02-14 01:47:53.177 [INFO][7336] cni-plugin/k8s.go 386: Populated endpoint ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0", GenerateName:"calico-kube-controllers-67c94569b7-", Namespace:"calico-system", SelfLink:"", UID:"7a7c0e06-d39c-44c9-a08b-42e6e8f22180", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c94569b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"", Pod:"calico-kube-controllers-67c94569b7-pl64x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3426814b1ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:53.187558 containerd[2699]: 2025-02-14 01:47:53.177 [INFO][7336] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.196/32] ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.187558 containerd[2699]: 2025-02-14 01:47:53.177 [INFO][7336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3426814b1ef ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.187558 containerd[2699]: 2025-02-14 01:47:53.179 [INFO][7336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.187558 containerd[2699]: 2025-02-14 01:47:53.179 [INFO][7336] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0", GenerateName:"calico-kube-controllers-67c94569b7-", Namespace:"calico-system", SelfLink:"", UID:"7a7c0e06-d39c-44c9-a08b-42e6e8f22180", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c94569b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32", Pod:"calico-kube-controllers-67c94569b7-pl64x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3426814b1ef", MAC:"06:ec:fa:b5:68:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:53.187558 containerd[2699]: 2025-02-14 01:47:53.185 [INFO][7336] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32" Namespace="calico-system" Pod="calico-kube-controllers-67c94569b7-pl64x" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:47:53.201736 containerd[2699]: time="2025-02-14T01:47:53.201678150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:53.201736 containerd[2699]: time="2025-02-14T01:47:53.201728950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:53.201808 containerd[2699]: time="2025-02-14T01:47:53.201740070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:53.201834 containerd[2699]: time="2025-02-14T01:47:53.201817549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:53.233869 systemd[1]: Started cri-containerd-645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32.scope - libcontainer container 645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32. 
Feb 14 01:47:53.256619 containerd[2699]: time="2025-02-14T01:47:53.256591548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67c94569b7-pl64x,Uid:7a7c0e06-d39c-44c9-a08b-42e6e8f22180,Namespace:calico-system,Attempt:1,} returns sandbox id \"645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32\"" Feb 14 01:47:53.269758 systemd-networkd[2600]: cali6a9ab543de4: Link UP Feb 14 01:47:53.270209 systemd-networkd[2600]: cali6a9ab543de4: Gained carrier Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.101 [INFO][7317] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0 coredns-6f6b679f8f- kube-system 66e17b36-25ed-486f-b40e-ad1476b372c7 782 0 2025-02-14 01:47:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-385c1ddb28 coredns-6f6b679f8f-h88pb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a9ab543de4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.101 [INFO][7317] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.125 [INFO][7374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" HandleID="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.148 [INFO][7374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" HandleID="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032f700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-385c1ddb28", "pod":"coredns-6f6b679f8f-h88pb", "timestamp":"2025-02-14 01:47:53.12506512 +0000 UTC"}, Hostname:"ci-4081.3.1-a-385c1ddb28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.148 [INFO][7374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.175 [INFO][7374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.175 [INFO][7374] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-385c1ddb28' Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.245 [INFO][7374] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.251 [INFO][7374] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.255 [INFO][7374] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.256 [INFO][7374] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.259 [INFO][7374] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.259 [INFO][7374] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.260 [INFO][7374] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4 Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.262 [INFO][7374] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.266 [INFO][7374] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.11.197/26] block=192.168.11.192/26 handle="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.266 [INFO][7374] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.197/26] handle="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.266 [INFO][7374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:53.277311 containerd[2699]: 2025-02-14 01:47:53.266 [INFO][7374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.197/26] IPv6=[] ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" HandleID="k8s-pod-network.6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.277734 containerd[2699]: 2025-02-14 01:47:53.267 [INFO][7317] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"66e17b36-25ed-486f-b40e-ad1476b372c7", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"", Pod:"coredns-6f6b679f8f-h88pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a9ab543de4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:53.277734 containerd[2699]: 2025-02-14 01:47:53.267 [INFO][7317] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.197/32] ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.277734 containerd[2699]: 2025-02-14 01:47:53.267 [INFO][7317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a9ab543de4 ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.277734 containerd[2699]: 2025-02-14 01:47:53.269 [INFO][7317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.277734 containerd[2699]: 2025-02-14 01:47:53.270 [INFO][7317] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"66e17b36-25ed-486f-b40e-ad1476b372c7", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4", Pod:"coredns-6f6b679f8f-h88pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a9ab543de4", MAC:"2a:6d:9b:12:37:8d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:53.277734 containerd[2699]: 2025-02-14 01:47:53.275 [INFO][7317] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4" Namespace="kube-system" Pod="coredns-6f6b679f8f-h88pb" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:47:53.291293 containerd[2699]: time="2025-02-14T01:47:53.291230471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:53.291317 containerd[2699]: time="2025-02-14T01:47:53.291288391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:53.291317 containerd[2699]: time="2025-02-14T01:47:53.291300231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:53.291395 containerd[2699]: time="2025-02-14T01:47:53.291375591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:53.308875 systemd[1]: Started cri-containerd-6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4.scope - libcontainer container 6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4. 
Feb 14 01:47:53.310816 systemd-networkd[2600]: calid233b15ebd1: Gained IPv6LL Feb 14 01:47:53.332839 containerd[2699]: time="2025-02-14T01:47:53.332811458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h88pb,Uid:66e17b36-25ed-486f-b40e-ad1476b372c7,Namespace:kube-system,Attempt:1,} returns sandbox id \"6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4\"" Feb 14 01:47:53.334703 containerd[2699]: time="2025-02-14T01:47:53.334678054Z" level=info msg="CreateContainer within sandbox \"6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 01:47:53.349528 containerd[2699]: time="2025-02-14T01:47:53.349493341Z" level=info msg="CreateContainer within sandbox \"6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e204f214dd450b2214eb4b9c30b97879ab4e728987e88257357a9b8394900a4\"" Feb 14 01:47:53.349891 containerd[2699]: time="2025-02-14T01:47:53.349864781Z" level=info msg="StartContainer for \"8e204f214dd450b2214eb4b9c30b97879ab4e728987e88257357a9b8394900a4\"" Feb 14 01:47:53.381932 systemd[1]: Started cri-containerd-8e204f214dd450b2214eb4b9c30b97879ab4e728987e88257357a9b8394900a4.scope - libcontainer container 8e204f214dd450b2214eb4b9c30b97879ab4e728987e88257357a9b8394900a4. 
Feb 14 01:47:53.399212 containerd[2699]: time="2025-02-14T01:47:53.399179111Z" level=info msg="StartContainer for \"8e204f214dd450b2214eb4b9c30b97879ab4e728987e88257357a9b8394900a4\" returns successfully" Feb 14 01:47:53.411571 containerd[2699]: time="2025-02-14T01:47:53.411544484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:53.411641 containerd[2699]: time="2025-02-14T01:47:53.411614923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 14 01:47:53.412360 containerd[2699]: time="2025-02-14T01:47:53.412339362Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:53.414145 containerd[2699]: time="2025-02-14T01:47:53.414117038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:53.414882 containerd[2699]: time="2025-02-14T01:47:53.414856156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 542.498856ms" Feb 14 01:47:53.414906 containerd[2699]: time="2025-02-14T01:47:53.414888996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 14 01:47:53.415625 containerd[2699]: 
time="2025-02-14T01:47:53.415610115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 14 01:47:53.416512 containerd[2699]: time="2025-02-14T01:47:53.416484473Z" level=info msg="CreateContainer within sandbox \"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 14 01:47:53.422003 containerd[2699]: time="2025-02-14T01:47:53.421977060Z" level=info msg="CreateContainer within sandbox \"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f1ff52a68a6c24f3cc0f7b070c0853101a15dda0ffd787617d2c935880dc43b6\"" Feb 14 01:47:53.422318 containerd[2699]: time="2025-02-14T01:47:53.422298220Z" level=info msg="StartContainer for \"f1ff52a68a6c24f3cc0f7b070c0853101a15dda0ffd787617d2c935880dc43b6\"" Feb 14 01:47:53.455872 systemd[1]: Started cri-containerd-f1ff52a68a6c24f3cc0f7b070c0853101a15dda0ffd787617d2c935880dc43b6.scope - libcontainer container f1ff52a68a6c24f3cc0f7b070c0853101a15dda0ffd787617d2c935880dc43b6. 
Feb 14 01:47:53.474975 containerd[2699]: time="2025-02-14T01:47:53.474943583Z" level=info msg="StartContainer for \"f1ff52a68a6c24f3cc0f7b070c0853101a15dda0ffd787617d2c935880dc43b6\" returns successfully" Feb 14 01:47:54.002974 containerd[2699]: time="2025-02-14T01:47:54.002651371Z" level=info msg="StopPodSandbox for \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\"" Feb 14 01:47:54.050235 kubelet[4120]: I0214 01:47:54.050182 4120 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 14 01:47:54.050235 kubelet[4120]: I0214 01:47:54.050225 4120 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.041 [INFO][7646] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.042 [INFO][7646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" iface="eth0" netns="/var/run/netns/cni-6b1a0b54-05d5-20aa-af18-1c2fd1e4242d" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.042 [INFO][7646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" iface="eth0" netns="/var/run/netns/cni-6b1a0b54-05d5-20aa-af18-1c2fd1e4242d" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.042 [INFO][7646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" iface="eth0" netns="/var/run/netns/cni-6b1a0b54-05d5-20aa-af18-1c2fd1e4242d" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.042 [INFO][7646] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.042 [INFO][7646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.059 [INFO][7674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.059 [INFO][7674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.059 [INFO][7674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.066 [WARNING][7674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.066 [INFO][7674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.067 [INFO][7674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:54.070131 containerd[2699]: 2025-02-14 01:47:54.068 [INFO][7646] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:47:54.070676 containerd[2699]: time="2025-02-14T01:47:54.070293150Z" level=info msg="TearDown network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\" successfully" Feb 14 01:47:54.070676 containerd[2699]: time="2025-02-14T01:47:54.070318990Z" level=info msg="StopPodSandbox for \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\" returns successfully" Feb 14 01:47:54.070720 containerd[2699]: time="2025-02-14T01:47:54.070696149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnmrh,Uid:3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1,Namespace:kube-system,Attempt:1,}" Feb 14 01:47:54.073226 kubelet[4120]: I0214 01:47:54.073206 4120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:47:54.073418 kubelet[4120]: I0214 01:47:54.073209 4120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:47:54.083284 kubelet[4120]: I0214 01:47:54.083240 4120 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-h88pb" podStartSLOduration=27.083225283 podStartE2EDuration="27.083225283s" podCreationTimestamp="2025-02-14 01:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:47:54.083044764 +0000 UTC m=+33.151816752" watchObservedRunningTime="2025-02-14 01:47:54.083225283 +0000 UTC m=+33.151997231" Feb 14 01:47:54.090501 systemd[1]: run-netns-cni\x2d6b1a0b54\x2d05d5\x2d20aa\x2daf18\x2d1c2fd1e4242d.mount: Deactivated successfully. Feb 14 01:47:54.091642 kubelet[4120]: I0214 01:47:54.091595 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bf5gn" podStartSLOduration=17.023394247 podStartE2EDuration="19.091579066s" podCreationTimestamp="2025-02-14 01:47:35 +0000 UTC" firstStartedPulling="2025-02-14 01:47:51.347300696 +0000 UTC m=+30.416072644" lastFinishedPulling="2025-02-14 01:47:53.415485515 +0000 UTC m=+32.484257463" observedRunningTime="2025-02-14 01:47:54.091181907 +0000 UTC m=+33.159953895" watchObservedRunningTime="2025-02-14 01:47:54.091579066 +0000 UTC m=+33.160351054" Feb 14 01:47:54.170933 systemd-networkd[2600]: cali67d2400c746: Link UP Feb 14 01:47:54.171266 systemd-networkd[2600]: cali67d2400c746: Gained carrier Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.104 [INFO][7694] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0 coredns-6f6b679f8f- kube-system 3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1 804 0 2025-02-14 01:47:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-385c1ddb28 coredns-6f6b679f8f-rnmrh eth0 coredns 
[] [] [kns.kube-system ksa.kube-system.coredns] cali67d2400c746 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.105 [INFO][7694] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.128 [INFO][7723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" HandleID="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.150 [INFO][7723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" HandleID="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cbaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-385c1ddb28", "pod":"coredns-6f6b679f8f-rnmrh", "timestamp":"2025-02-14 01:47:54.128534669 +0000 UTC"}, Hostname:"ci-4081.3.1-a-385c1ddb28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 01:47:54.178662 containerd[2699]: 
2025-02-14 01:47:54.150 [INFO][7723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.150 [INFO][7723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.150 [INFO][7723] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-385c1ddb28' Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.152 [INFO][7723] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.154 [INFO][7723] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.157 [INFO][7723] ipam/ipam.go 489: Trying affinity for 192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.159 [INFO][7723] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.160 [INFO][7723] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.160 [INFO][7723] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.162 [INFO][7723] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.164 [INFO][7723] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.11.192/26 handle="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.168 [INFO][7723] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.198/26] block=192.168.11.192/26 handle="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.168 [INFO][7723] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.198/26] handle="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" host="ci-4081.3.1-a-385c1ddb28" Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.168 [INFO][7723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:47:54.178662 containerd[2699]: 2025-02-14 01:47:54.168 [INFO][7723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.198/26] IPv6=[] ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" HandleID="k8s-pod-network.6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.179072 containerd[2699]: 2025-02-14 01:47:54.169 [INFO][7694] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, 
time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"", Pod:"coredns-6f6b679f8f-rnmrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67d2400c746", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:54.179072 containerd[2699]: 2025-02-14 01:47:54.169 [INFO][7694] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.198/32] ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.179072 containerd[2699]: 2025-02-14 01:47:54.169 [INFO][7694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67d2400c746 ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.179072 containerd[2699]: 2025-02-14 01:47:54.171 [INFO][7694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.179072 containerd[2699]: 2025-02-14 01:47:54.171 [INFO][7694] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a", Pod:"coredns-6f6b679f8f-rnmrh", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67d2400c746", MAC:"02:6d:51:7b:61:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:47:54.179072 containerd[2699]: 2025-02-14 01:47:54.176 [INFO][7694] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnmrh" WorkloadEndpoint="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:47:54.192622 containerd[2699]: time="2025-02-14T01:47:54.192552536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 01:47:54.192653 containerd[2699]: time="2025-02-14T01:47:54.192616656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 01:47:54.192653 containerd[2699]: time="2025-02-14T01:47:54.192631935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:54.192731 containerd[2699]: time="2025-02-14T01:47:54.192712855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 01:47:54.214964 systemd[1]: Started cri-containerd-6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a.scope - libcontainer container 6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a. Feb 14 01:47:54.224618 containerd[2699]: time="2025-02-14T01:47:54.224580589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:54.224679 containerd[2699]: time="2025-02-14T01:47:54.224651869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 14 01:47:54.226605 containerd[2699]: time="2025-02-14T01:47:54.226560625Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:54.228936 containerd[2699]: time="2025-02-14T01:47:54.228911300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 01:47:54.229641 containerd[2699]: time="2025-02-14T01:47:54.229613898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 813.975024ms" Feb 14 01:47:54.229676 containerd[2699]: time="2025-02-14T01:47:54.229648898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 
14 01:47:54.235096 containerd[2699]: time="2025-02-14T01:47:54.235065447Z" level=info msg="CreateContainer within sandbox \"645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 14 01:47:54.240113 containerd[2699]: time="2025-02-14T01:47:54.240086877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnmrh,Uid:3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1,Namespace:kube-system,Attempt:1,} returns sandbox id \"6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a\"" Feb 14 01:47:54.241908 containerd[2699]: time="2025-02-14T01:47:54.241885113Z" level=info msg="CreateContainer within sandbox \"6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 01:47:54.256920 containerd[2699]: time="2025-02-14T01:47:54.256847482Z" level=info msg="CreateContainer within sandbox \"645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"aced3efd608c9ef1e4ae50af7725dcd17669d3fea559acc202424654d6c33c6b\"" Feb 14 01:47:54.257195 containerd[2699]: time="2025-02-14T01:47:54.257176161Z" level=info msg="StartContainer for \"aced3efd608c9ef1e4ae50af7725dcd17669d3fea559acc202424654d6c33c6b\"" Feb 14 01:47:54.258079 containerd[2699]: time="2025-02-14T01:47:54.258050199Z" level=info msg="CreateContainer within sandbox \"6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a680f351d485c48e6eb22656e1a89431fe2a7f28a5525506fee28f7f3e0a253\"" Feb 14 01:47:54.258373 containerd[2699]: time="2025-02-14T01:47:54.258348479Z" level=info msg="StartContainer for \"5a680f351d485c48e6eb22656e1a89431fe2a7f28a5525506fee28f7f3e0a253\"" Feb 14 01:47:54.270832 systemd-networkd[2600]: cali3426814b1ef: Gained IPv6LL Feb 14 01:47:54.289924 
systemd[1]: Started cri-containerd-5a680f351d485c48e6eb22656e1a89431fe2a7f28a5525506fee28f7f3e0a253.scope - libcontainer container 5a680f351d485c48e6eb22656e1a89431fe2a7f28a5525506fee28f7f3e0a253. Feb 14 01:47:54.291119 systemd[1]: Started cri-containerd-aced3efd608c9ef1e4ae50af7725dcd17669d3fea559acc202424654d6c33c6b.scope - libcontainer container aced3efd608c9ef1e4ae50af7725dcd17669d3fea559acc202424654d6c33c6b. Feb 14 01:47:54.307622 containerd[2699]: time="2025-02-14T01:47:54.307588576Z" level=info msg="StartContainer for \"5a680f351d485c48e6eb22656e1a89431fe2a7f28a5525506fee28f7f3e0a253\" returns successfully" Feb 14 01:47:54.314778 containerd[2699]: time="2025-02-14T01:47:54.314753241Z" level=info msg="StartContainer for \"aced3efd608c9ef1e4ae50af7725dcd17669d3fea559acc202424654d6c33c6b\" returns successfully" Feb 14 01:47:55.037969 systemd-networkd[2600]: cali6a9ab543de4: Gained IPv6LL Feb 14 01:47:55.084070 kubelet[4120]: I0214 01:47:55.084025 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67c94569b7-pl64x" podStartSLOduration=19.111173099 podStartE2EDuration="20.08401089s" podCreationTimestamp="2025-02-14 01:47:35 +0000 UTC" firstStartedPulling="2025-02-14 01:47:53.257444906 +0000 UTC m=+32.326216894" lastFinishedPulling="2025-02-14 01:47:54.230282697 +0000 UTC m=+33.299054685" observedRunningTime="2025-02-14 01:47:55.083497651 +0000 UTC m=+34.152269639" watchObservedRunningTime="2025-02-14 01:47:55.08401089 +0000 UTC m=+34.152782838" Feb 14 01:47:55.090374 kubelet[4120]: I0214 01:47:55.090333 4120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rnmrh" podStartSLOduration=28.090321518 podStartE2EDuration="28.090321518s" podCreationTimestamp="2025-02-14 01:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 01:47:55.090149078 +0000 UTC 
m=+34.158921066" watchObservedRunningTime="2025-02-14 01:47:55.090321518 +0000 UTC m=+34.159093466" Feb 14 01:47:55.229861 systemd-networkd[2600]: cali67d2400c746: Gained IPv6LL Feb 14 01:47:59.123400 kubelet[4120]: I0214 01:47:59.123360 4120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:48:15.791944 kubelet[4120]: I0214 01:48:15.791835 4120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:48:20.993680 containerd[2699]: time="2025-02-14T01:48:20.993626575Z" level=info msg="StopPodSandbox for \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\"" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.027 [WARNING][8090] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"66e17b36-25ed-486f-b40e-ad1476b372c7", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4", Pod:"coredns-6f6b679f8f-h88pb", 
Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a9ab543de4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.027 [INFO][8090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.027 [INFO][8090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" iface="eth0" netns="" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.027 [INFO][8090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.027 [INFO][8090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.044 [INFO][8116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.044 [INFO][8116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.044 [INFO][8116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.051 [WARNING][8116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.051 [INFO][8116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.052 [INFO][8116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.055225 containerd[2699]: 2025-02-14 01:48:21.054 [INFO][8090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.055631 containerd[2699]: time="2025-02-14T01:48:21.055242753Z" level=info msg="TearDown network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\" successfully" Feb 14 01:48:21.055631 containerd[2699]: time="2025-02-14T01:48:21.055273793Z" level=info msg="StopPodSandbox for \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\" returns successfully" Feb 14 01:48:21.055631 containerd[2699]: time="2025-02-14T01:48:21.055594113Z" level=info msg="RemovePodSandbox for \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\"" Feb 14 01:48:21.055631 containerd[2699]: time="2025-02-14T01:48:21.055624193Z" level=info msg="Forcibly stopping sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\"" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.087 [WARNING][8146] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"66e17b36-25ed-486f-b40e-ad1476b372c7", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"6d32526057fc84c51d4bdccd36428cd7ec83854c02f4b29f074493505c225cb4", Pod:"coredns-6f6b679f8f-h88pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a9ab543de4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.087 [INFO][8146] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.087 [INFO][8146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" iface="eth0" netns="" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.087 [INFO][8146] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.087 [INFO][8146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.104 [INFO][8167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.104 [INFO][8167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.104 [INFO][8167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.113 [WARNING][8167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.113 [INFO][8167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" HandleID="k8s-pod-network.172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--h88pb-eth0" Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.115 [INFO][8167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.117370 containerd[2699]: 2025-02-14 01:48:21.116 [INFO][8146] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe" Feb 14 01:48:21.117725 containerd[2699]: time="2025-02-14T01:48:21.117413370Z" level=info msg="TearDown network for sandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\" successfully" Feb 14 01:48:21.123238 containerd[2699]: time="2025-02-14T01:48:21.123213128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 14 01:48:21.123308 containerd[2699]: time="2025-02-14T01:48:21.123262768Z" level=info msg="RemovePodSandbox \"172497d1ec9377d2ed57acb7d796ea84503638f0edd79ee2ee38747ed155b2fe\" returns successfully" Feb 14 01:48:21.123583 containerd[2699]: time="2025-02-14T01:48:21.123560928Z" level=info msg="StopPodSandbox for \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\"" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.156 [WARNING][8202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"368a98f4-3c61-48c7-a03d-61e5961b1cc9", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84", Pod:"csi-node-driver-bf5gn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9028f94f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.156 [INFO][8202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.156 [INFO][8202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" iface="eth0" netns="" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.156 [INFO][8202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.156 [INFO][8202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.173 [INFO][8221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.174 [INFO][8221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.174 [INFO][8221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.181 [WARNING][8221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.181 [INFO][8221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.182 [INFO][8221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.184416 containerd[2699]: 2025-02-14 01:48:21.183 [INFO][8202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.184826 containerd[2699]: time="2025-02-14T01:48:21.184447826Z" level=info msg="TearDown network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\" successfully" Feb 14 01:48:21.184826 containerd[2699]: time="2025-02-14T01:48:21.184470666Z" level=info msg="StopPodSandbox for \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\" returns successfully" Feb 14 01:48:21.184826 containerd[2699]: time="2025-02-14T01:48:21.184715626Z" level=info msg="RemovePodSandbox for \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\"" Feb 14 01:48:21.184826 containerd[2699]: time="2025-02-14T01:48:21.184741946Z" level=info msg="Forcibly stopping sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\"" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.216 [WARNING][8254] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"368a98f4-3c61-48c7-a03d-61e5961b1cc9", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"f535597921f06d9217b9e6c5385081fdfd5f9717da4cdf191ffa39d33ca52f84", Pod:"csi-node-driver-bf5gn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid9028f94f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.216 [INFO][8254] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.216 [INFO][8254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" iface="eth0" netns="" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.216 [INFO][8254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.216 [INFO][8254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.233 [INFO][8273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.233 [INFO][8273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.233 [INFO][8273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.242 [WARNING][8273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.242 [INFO][8273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" HandleID="k8s-pod-network.6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Workload="ci--4081.3.1--a--385c1ddb28-k8s-csi--node--driver--bf5gn-eth0" Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.243 [INFO][8273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.246071 containerd[2699]: 2025-02-14 01:48:21.244 [INFO][8254] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3" Feb 14 01:48:21.246414 containerd[2699]: time="2025-02-14T01:48:21.246069843Z" level=info msg="TearDown network for sandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\" successfully" Feb 14 01:48:21.247627 containerd[2699]: time="2025-02-14T01:48:21.247599803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 14 01:48:21.247701 containerd[2699]: time="2025-02-14T01:48:21.247651283Z" level=info msg="RemovePodSandbox \"6ba593bcf86fe36a3169f58c3da48bd8822731cc393f45ebba309023110d4fd3\" returns successfully" Feb 14 01:48:21.248016 containerd[2699]: time="2025-02-14T01:48:21.247993763Z" level=info msg="StopPodSandbox for \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\"" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.279 [WARNING][8307] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a", Pod:"coredns-6f6b679f8f-rnmrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67d2400c746", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.279 [INFO][8307] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.279 [INFO][8307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" iface="eth0" netns="" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.279 [INFO][8307] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.279 [INFO][8307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.296 [INFO][8328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.297 [INFO][8328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.297 [INFO][8328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.304 [WARNING][8328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.304 [INFO][8328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.305 [INFO][8328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.307712 containerd[2699]: 2025-02-14 01:48:21.306 [INFO][8307] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.308066 containerd[2699]: time="2025-02-14T01:48:21.307757621Z" level=info msg="TearDown network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\" successfully" Feb 14 01:48:21.308066 containerd[2699]: time="2025-02-14T01:48:21.307780541Z" level=info msg="StopPodSandbox for \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\" returns successfully" Feb 14 01:48:21.308145 containerd[2699]: time="2025-02-14T01:48:21.308122501Z" level=info msg="RemovePodSandbox for \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\"" Feb 14 01:48:21.308173 containerd[2699]: time="2025-02-14T01:48:21.308152861Z" level=info msg="Forcibly stopping sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\"" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.339 [WARNING][8361] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3b0ed4b7-1cef-42d9-9eba-8a303a0a9ff1", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"6a15e72d507aff7af5f33dfe902e8bcfd8e683eb6e020a4e6b60e3017d645f2a", Pod:"coredns-6f6b679f8f-rnmrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67d2400c746", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.340 [INFO][8361] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.340 [INFO][8361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" iface="eth0" netns="" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.340 [INFO][8361] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.340 [INFO][8361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.357 [INFO][8384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.357 [INFO][8384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.357 [INFO][8384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.365 [WARNING][8384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.365 [INFO][8384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" HandleID="k8s-pod-network.bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Workload="ci--4081.3.1--a--385c1ddb28-k8s-coredns--6f6b679f8f--rnmrh-eth0" Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.366 [INFO][8384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.369009 containerd[2699]: 2025-02-14 01:48:21.367 [INFO][8361] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae" Feb 14 01:48:21.369327 containerd[2699]: time="2025-02-14T01:48:21.369042158Z" level=info msg="TearDown network for sandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\" successfully" Feb 14 01:48:21.370707 containerd[2699]: time="2025-02-14T01:48:21.370678718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 14 01:48:21.370741 containerd[2699]: time="2025-02-14T01:48:21.370732198Z" level=info msg="RemovePodSandbox \"bd084f7482c64449436e28e8686d76464fd524d2dae583e4747403e4ad4cc4ae\" returns successfully" Feb 14 01:48:21.371114 containerd[2699]: time="2025-02-14T01:48:21.371092358Z" level=info msg="StopPodSandbox for \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\"" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.401 [WARNING][8421] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4057f79-38ab-4790-ae6d-39417a81be01", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98", Pod:"calico-apiserver-5f5c9f9c9f-2f27r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43929a304c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.402 [INFO][8421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.402 [INFO][8421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" iface="eth0" netns="" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.402 [INFO][8421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.402 [INFO][8421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.418 [INFO][8441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.419 [INFO][8441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.419 [INFO][8441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.426 [WARNING][8441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.426 [INFO][8441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.427 [INFO][8441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.429778 containerd[2699]: 2025-02-14 01:48:21.428 [INFO][8421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.430157 containerd[2699]: time="2025-02-14T01:48:21.429830056Z" level=info msg="TearDown network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\" successfully" Feb 14 01:48:21.430157 containerd[2699]: time="2025-02-14T01:48:21.429866016Z" level=info msg="StopPodSandbox for \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\" returns successfully" Feb 14 01:48:21.430248 containerd[2699]: time="2025-02-14T01:48:21.430221976Z" level=info msg="RemovePodSandbox for \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\"" Feb 14 01:48:21.430269 containerd[2699]: time="2025-02-14T01:48:21.430258056Z" level=info msg="Forcibly stopping sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\"" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.460 [WARNING][8474] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4057f79-38ab-4790-ae6d-39417a81be01", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"69df54a57798966d40e77b441ae75f0c79dd4885e6d715f472b6d2a79cbacd98", Pod:"calico-apiserver-5f5c9f9c9f-2f27r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43929a304c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.461 [INFO][8474] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.461 [INFO][8474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" iface="eth0" netns="" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.461 [INFO][8474] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.461 [INFO][8474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.479 [INFO][8495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.479 [INFO][8495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.479 [INFO][8495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.486 [WARNING][8495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.486 [INFO][8495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" HandleID="k8s-pod-network.628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--2f27r-eth0" Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.487 [INFO][8495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.489860 containerd[2699]: 2025-02-14 01:48:21.488 [INFO][8474] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d" Feb 14 01:48:21.490211 containerd[2699]: time="2025-02-14T01:48:21.489891314Z" level=info msg="TearDown network for sandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\" successfully" Feb 14 01:48:21.491295 containerd[2699]: time="2025-02-14T01:48:21.491268274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 14 01:48:21.491325 containerd[2699]: time="2025-02-14T01:48:21.491317474Z" level=info msg="RemovePodSandbox \"628a62b75679bd28195365eed4147a8bb27597b9f1f413fbd7659a91b503b13d\" returns successfully" Feb 14 01:48:21.491670 containerd[2699]: time="2025-02-14T01:48:21.491647754Z" level=info msg="StopPodSandbox for \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\"" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.523 [WARNING][8525] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0", GenerateName:"calico-kube-controllers-67c94569b7-", Namespace:"calico-system", SelfLink:"", UID:"7a7c0e06-d39c-44c9-a08b-42e6e8f22180", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c94569b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32", Pod:"calico-kube-controllers-67c94569b7-pl64x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3426814b1ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.523 [INFO][8525] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.523 [INFO][8525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" iface="eth0" netns="" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.523 [INFO][8525] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.523 [INFO][8525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.540 [INFO][8547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.541 [INFO][8547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.541 [INFO][8547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.548 [WARNING][8547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.548 [INFO][8547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.549 [INFO][8547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.551512 containerd[2699]: 2025-02-14 01:48:21.550 [INFO][8525] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.551828 containerd[2699]: time="2025-02-14T01:48:21.551502452Z" level=info msg="TearDown network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\" successfully" Feb 14 01:48:21.551828 containerd[2699]: time="2025-02-14T01:48:21.551522932Z" level=info msg="StopPodSandbox for \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\" returns successfully" Feb 14 01:48:21.552039 containerd[2699]: time="2025-02-14T01:48:21.552017052Z" level=info msg="RemovePodSandbox for \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\"" Feb 14 01:48:21.552065 containerd[2699]: time="2025-02-14T01:48:21.552049852Z" level=info msg="Forcibly stopping sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\"" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.583 [WARNING][8580] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0", GenerateName:"calico-kube-controllers-67c94569b7-", Namespace:"calico-system", SelfLink:"", UID:"7a7c0e06-d39c-44c9-a08b-42e6e8f22180", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67c94569b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"645ebdb0bdb5d92b747209f1309f9c8eff5dd16ab9a73d05b5ffd04b0bb0bb32", Pod:"calico-kube-controllers-67c94569b7-pl64x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3426814b1ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.584 [INFO][8580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.584 [INFO][8580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" iface="eth0" netns="" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.584 [INFO][8580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.584 [INFO][8580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.601 [INFO][8602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.601 [INFO][8602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.601 [INFO][8602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.608 [WARNING][8602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.608 [INFO][8602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" HandleID="k8s-pod-network.325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--kube--controllers--67c94569b7--pl64x-eth0" Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.609 [INFO][8602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.611637 containerd[2699]: 2025-02-14 01:48:21.610 [INFO][8580] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e" Feb 14 01:48:21.611935 containerd[2699]: time="2025-02-14T01:48:21.611665830Z" level=info msg="TearDown network for sandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\" successfully" Feb 14 01:48:21.613068 containerd[2699]: time="2025-02-14T01:48:21.613041350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 14 01:48:21.613133 containerd[2699]: time="2025-02-14T01:48:21.613095669Z" level=info msg="RemovePodSandbox \"325e64c740f49b463c9a92efdcc63419bf3ab4443635d4cb529d954b7727d19e\" returns successfully" Feb 14 01:48:21.613475 containerd[2699]: time="2025-02-14T01:48:21.613454229Z" level=info msg="StopPodSandbox for \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\"" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.646 [WARNING][8633] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ccf9c15-46bf-4f15-bf25-3067d3a65e25", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239", Pod:"calico-apiserver-5f5c9f9c9f-cnrft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid233b15ebd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.646 [INFO][8633] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.646 [INFO][8633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" iface="eth0" netns="" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.646 [INFO][8633] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.646 [INFO][8633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.664 [INFO][8658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.664 [INFO][8658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.664 [INFO][8658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.671 [WARNING][8658] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.671 [INFO][8658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.672 [INFO][8658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.675128 containerd[2699]: 2025-02-14 01:48:21.673 [INFO][8633] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.675425 containerd[2699]: time="2025-02-14T01:48:21.675157207Z" level=info msg="TearDown network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\" successfully" Feb 14 01:48:21.675425 containerd[2699]: time="2025-02-14T01:48:21.675176887Z" level=info msg="StopPodSandbox for \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\" returns successfully" Feb 14 01:48:21.675545 containerd[2699]: time="2025-02-14T01:48:21.675523847Z" level=info msg="RemovePodSandbox for \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\"" Feb 14 01:48:21.675569 containerd[2699]: time="2025-02-14T01:48:21.675553407Z" level=info msg="Forcibly stopping sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\"" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.706 [WARNING][8695] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0", GenerateName:"calico-apiserver-5f5c9f9c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ccf9c15-46bf-4f15-bf25-3067d3a65e25", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 1, 47, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5c9f9c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-385c1ddb28", ContainerID:"de653c1aa35788e1e6955fadebce24cfcce656418c6042adbf035345d5baf239", Pod:"calico-apiserver-5f5c9f9c9f-cnrft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid233b15ebd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.707 [INFO][8695] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.707 [INFO][8695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" iface="eth0" netns="" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.707 [INFO][8695] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.707 [INFO][8695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.724 [INFO][8714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.724 [INFO][8714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.724 [INFO][8714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.731 [WARNING][8714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.731 [INFO][8714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" HandleID="k8s-pod-network.51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Workload="ci--4081.3.1--a--385c1ddb28-k8s-calico--apiserver--5f5c9f9c9f--cnrft-eth0" Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.732 [INFO][8714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 01:48:21.734687 containerd[2699]: 2025-02-14 01:48:21.733 [INFO][8695] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402" Feb 14 01:48:21.735084 containerd[2699]: time="2025-02-14T01:48:21.734711305Z" level=info msg="TearDown network for sandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\" successfully" Feb 14 01:48:21.736156 containerd[2699]: time="2025-02-14T01:48:21.736128345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 14 01:48:21.736191 containerd[2699]: time="2025-02-14T01:48:21.736180505Z" level=info msg="RemovePodSandbox \"51e079474bd87e0ab3554914719ad6484cf5a957500fa701caefe53ddb023402\" returns successfully" Feb 14 01:48:22.286872 kubelet[4120]: I0214 01:48:22.286825 4120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 01:49:16.768437 systemd[1]: Started sshd@7-147.75.62.106:22-218.92.0.209:21556.service - OpenSSH per-connection server daemon (218.92.0.209:21556). Feb 14 01:49:16.989897 sshd[8872]: Unable to negotiate with 218.92.0.209 port 21556: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 14 01:49:16.991408 systemd[1]: sshd@7-147.75.62.106:22-218.92.0.209:21556.service: Deactivated successfully. Feb 14 01:57:18.930318 systemd[1]: Started sshd@8-147.75.62.106:22-139.178.68.195:53598.service - OpenSSH per-connection server daemon (139.178.68.195:53598). Feb 14 01:57:19.338763 sshd[10078]: Accepted publickey for core from 139.178.68.195 port 53598 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g Feb 14 01:57:19.339878 sshd[10078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 01:57:19.343766 systemd-logind[2681]: New session 10 of user core. Feb 14 01:57:19.351892 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 14 01:57:19.696122 sshd[10078]: pam_unix(sshd:session): session closed for user core Feb 14 01:57:19.699617 systemd[1]: sshd@8-147.75.62.106:22-139.178.68.195:53598.service: Deactivated successfully. Feb 14 01:57:19.702014 systemd[1]: session-10.scope: Deactivated successfully. Feb 14 01:57:19.702635 systemd-logind[2681]: Session 10 logged out. Waiting for processes to exit. Feb 14 01:57:19.703284 systemd-logind[2681]: Removed session 10. 
Feb 14 01:57:24.776388 systemd[1]: Started sshd@9-147.75.62.106:22-139.178.68.195:53610.service - OpenSSH per-connection server daemon (139.178.68.195:53610).
Feb 14 01:57:25.185302 sshd[10143]: Accepted publickey for core from 139.178.68.195 port 53610 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:25.186367 sshd[10143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:25.189381 systemd-logind[2681]: New session 11 of user core.
Feb 14 01:57:25.200845 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 14 01:57:25.542179 sshd[10143]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:25.545116 systemd[1]: sshd@9-147.75.62.106:22-139.178.68.195:53610.service: Deactivated successfully.
Feb 14 01:57:25.547483 systemd[1]: session-11.scope: Deactivated successfully.
Feb 14 01:57:25.548022 systemd-logind[2681]: Session 11 logged out. Waiting for processes to exit.
Feb 14 01:57:25.548659 systemd-logind[2681]: Removed session 11.
Feb 14 01:57:25.617380 systemd[1]: Started sshd@10-147.75.62.106:22-139.178.68.195:53622.service - OpenSSH per-connection server daemon (139.178.68.195:53622).
Feb 14 01:57:26.039165 sshd[10182]: Accepted publickey for core from 139.178.68.195 port 53622 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:26.040390 sshd[10182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:26.043362 systemd-logind[2681]: New session 12 of user core.
Feb 14 01:57:26.058860 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 14 01:57:26.428535 sshd[10182]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:26.431461 systemd[1]: sshd@10-147.75.62.106:22-139.178.68.195:53622.service: Deactivated successfully.
Feb 14 01:57:26.433176 systemd[1]: session-12.scope: Deactivated successfully.
Feb 14 01:57:26.433696 systemd-logind[2681]: Session 12 logged out. Waiting for processes to exit.
Feb 14 01:57:26.434289 systemd-logind[2681]: Removed session 12.
Feb 14 01:57:26.502226 systemd[1]: Started sshd@11-147.75.62.106:22-139.178.68.195:54326.service - OpenSSH per-connection server daemon (139.178.68.195:54326).
Feb 14 01:57:26.907429 sshd[10238]: Accepted publickey for core from 139.178.68.195 port 54326 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:26.908503 sshd[10238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:26.911577 systemd-logind[2681]: New session 13 of user core.
Feb 14 01:57:26.923909 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 14 01:57:27.259132 sshd[10238]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:27.262349 systemd[1]: sshd@11-147.75.62.106:22-139.178.68.195:54326.service: Deactivated successfully.
Feb 14 01:57:27.264103 systemd[1]: session-13.scope: Deactivated successfully.
Feb 14 01:57:27.264644 systemd-logind[2681]: Session 13 logged out. Waiting for processes to exit.
Feb 14 01:57:27.265205 systemd-logind[2681]: Removed session 13.
Feb 14 01:57:32.335295 systemd[1]: Started sshd@12-147.75.62.106:22-139.178.68.195:54340.service - OpenSSH per-connection server daemon (139.178.68.195:54340).
Feb 14 01:57:32.752398 sshd[10303]: Accepted publickey for core from 139.178.68.195 port 54340 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:32.753721 sshd[10303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:32.756757 systemd-logind[2681]: New session 14 of user core.
Feb 14 01:57:32.765919 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 14 01:57:33.109739 sshd[10303]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:33.113186 systemd[1]: sshd@12-147.75.62.106:22-139.178.68.195:54340.service: Deactivated successfully.
Feb 14 01:57:33.115435 systemd[1]: session-14.scope: Deactivated successfully.
Feb 14 01:57:33.115973 systemd-logind[2681]: Session 14 logged out. Waiting for processes to exit.
Feb 14 01:57:33.116542 systemd-logind[2681]: Removed session 14.
Feb 14 01:57:37.588254 update_engine[2691]: I20250214 01:57:37.588192 2691 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 14 01:57:37.588254 update_engine[2691]: I20250214 01:57:37.588248 2691 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 14 01:57:37.588679 update_engine[2691]: I20250214 01:57:37.588466 2691 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 14 01:57:37.588833 update_engine[2691]: I20250214 01:57:37.588817 2691 omaha_request_params.cc:62] Current group set to lts
Feb 14 01:57:37.588908 update_engine[2691]: I20250214 01:57:37.588896 2691 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 14 01:57:37.588932 update_engine[2691]: I20250214 01:57:37.588905 2691 update_attempter.cc:643] Scheduling an action processor start.
Feb 14 01:57:37.588932 update_engine[2691]: I20250214 01:57:37.588919 2691 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 14 01:57:37.588969 update_engine[2691]: I20250214 01:57:37.588944 2691 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 14 01:57:37.589005 update_engine[2691]: I20250214 01:57:37.588994 2691 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 14 01:57:37.589024 update_engine[2691]: I20250214 01:57:37.589002 2691 omaha_request_action.cc:272] Request:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]:
Feb 14 01:57:37.589024 update_engine[2691]: I20250214 01:57:37.589008 2691 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 14 01:57:37.589192 locksmithd[2720]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 14 01:57:37.589963 update_engine[2691]: I20250214 01:57:37.589946 2691 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 14 01:57:37.590208 update_engine[2691]: I20250214 01:57:37.590188 2691 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 14 01:57:37.590988 update_engine[2691]: E20250214 01:57:37.590970 2691 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 14 01:57:37.591033 update_engine[2691]: I20250214 01:57:37.591022 2691 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 14 01:57:38.183309 systemd[1]: Started sshd@13-147.75.62.106:22-139.178.68.195:47830.service - OpenSSH per-connection server daemon (139.178.68.195:47830).
Feb 14 01:57:38.599724 sshd[10355]: Accepted publickey for core from 139.178.68.195 port 47830 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:38.600856 sshd[10355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:38.603997 systemd-logind[2681]: New session 15 of user core.
Feb 14 01:57:38.621857 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 14 01:57:38.956571 sshd[10355]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:38.959622 systemd[1]: sshd@13-147.75.62.106:22-139.178.68.195:47830.service: Deactivated successfully.
Feb 14 01:57:38.961987 systemd[1]: session-15.scope: Deactivated successfully.
Feb 14 01:57:38.963191 systemd-logind[2681]: Session 15 logged out. Waiting for processes to exit.
Feb 14 01:57:38.963888 systemd-logind[2681]: Removed session 15.
Feb 14 01:57:44.025393 systemd[1]: Started sshd@14-147.75.62.106:22-139.178.68.195:47846.service - OpenSSH per-connection server daemon (139.178.68.195:47846).
Feb 14 01:57:44.416834 sshd[10394]: Accepted publickey for core from 139.178.68.195 port 47846 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:44.418119 sshd[10394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:44.421192 systemd-logind[2681]: New session 16 of user core.
Feb 14 01:57:44.436846 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 14 01:57:44.759009 sshd[10394]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:44.761800 systemd[1]: sshd@14-147.75.62.106:22-139.178.68.195:47846.service: Deactivated successfully.
Feb 14 01:57:44.763451 systemd[1]: session-16.scope: Deactivated successfully.
Feb 14 01:57:44.763976 systemd-logind[2681]: Session 16 logged out. Waiting for processes to exit.
Feb 14 01:57:44.764525 systemd-logind[2681]: Removed session 16.
Feb 14 01:57:44.835210 systemd[1]: Started sshd@15-147.75.62.106:22-139.178.68.195:47862.service - OpenSSH per-connection server daemon (139.178.68.195:47862).
Feb 14 01:57:45.251280 sshd[10431]: Accepted publickey for core from 139.178.68.195 port 47862 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:45.252395 sshd[10431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:45.255392 systemd-logind[2681]: New session 17 of user core.
Feb 14 01:57:45.266845 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 14 01:57:45.717648 sshd[10431]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:45.720477 systemd[1]: sshd@15-147.75.62.106:22-139.178.68.195:47862.service: Deactivated successfully.
Feb 14 01:57:45.722122 systemd[1]: session-17.scope: Deactivated successfully.
Feb 14 01:57:45.722625 systemd-logind[2681]: Session 17 logged out. Waiting for processes to exit.
Feb 14 01:57:45.723221 systemd-logind[2681]: Removed session 17.
Feb 14 01:57:45.803139 systemd[1]: Started sshd@16-147.75.62.106:22-139.178.68.195:47864.service - OpenSSH per-connection server daemon (139.178.68.195:47864).
Feb 14 01:57:46.231431 sshd[10465]: Accepted publickey for core from 139.178.68.195 port 47864 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:46.232512 sshd[10465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:46.235486 systemd-logind[2681]: New session 18 of user core.
Feb 14 01:57:46.248838 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 14 01:57:47.584866 update_engine[2691]: I20250214 01:57:47.584815 2691 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 14 01:57:47.585182 update_engine[2691]: I20250214 01:57:47.585005 2691 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 14 01:57:47.585208 update_engine[2691]: I20250214 01:57:47.585186 2691 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 14 01:57:47.585813 update_engine[2691]: E20250214 01:57:47.585794 2691 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 14 01:57:47.585848 update_engine[2691]: I20250214 01:57:47.585836 2691 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 14 01:57:47.605492 sshd[10465]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:47.608416 systemd[1]: sshd@16-147.75.62.106:22-139.178.68.195:47864.service: Deactivated successfully.
Feb 14 01:57:47.610077 systemd[1]: session-18.scope: Deactivated successfully.
Feb 14 01:57:47.610270 systemd[1]: session-18.scope: Consumed 3.828s CPU time.
Feb 14 01:57:47.610604 systemd-logind[2681]: Session 18 logged out. Waiting for processes to exit.
Feb 14 01:57:47.611207 systemd-logind[2681]: Removed session 18.
Feb 14 01:57:47.678089 systemd[1]: Started sshd@17-147.75.62.106:22-139.178.68.195:57998.service - OpenSSH per-connection server daemon (139.178.68.195:57998).
Feb 14 01:57:48.100851 sshd[10562]: Accepted publickey for core from 139.178.68.195 port 57998 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:48.101967 sshd[10562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:48.104858 systemd-logind[2681]: New session 19 of user core.
Feb 14 01:57:48.117907 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 14 01:57:48.552272 sshd[10562]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:48.555168 systemd[1]: sshd@17-147.75.62.106:22-139.178.68.195:57998.service: Deactivated successfully.
Feb 14 01:57:48.556802 systemd[1]: session-19.scope: Deactivated successfully.
Feb 14 01:57:48.557339 systemd-logind[2681]: Session 19 logged out. Waiting for processes to exit.
Feb 14 01:57:48.557965 systemd-logind[2681]: Removed session 19.
Feb 14 01:57:48.623210 systemd[1]: Started sshd@18-147.75.62.106:22-139.178.68.195:58014.service - OpenSSH per-connection server daemon (139.178.68.195:58014).
Feb 14 01:57:49.026192 sshd[10617]: Accepted publickey for core from 139.178.68.195 port 58014 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:49.027287 sshd[10617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:49.030120 systemd-logind[2681]: New session 20 of user core.
Feb 14 01:57:49.039851 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 14 01:57:49.375607 sshd[10617]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:49.378384 systemd[1]: sshd@18-147.75.62.106:22-139.178.68.195:58014.service: Deactivated successfully.
Feb 14 01:57:49.380027 systemd[1]: session-20.scope: Deactivated successfully.
Feb 14 01:57:49.380505 systemd-logind[2681]: Session 20 logged out. Waiting for processes to exit.
Feb 14 01:57:49.381089 systemd-logind[2681]: Removed session 20.
Feb 14 01:57:54.454331 systemd[1]: Started sshd@19-147.75.62.106:22-139.178.68.195:58028.service - OpenSSH per-connection server daemon (139.178.68.195:58028).
Feb 14 01:57:54.858287 sshd[10658]: Accepted publickey for core from 139.178.68.195 port 58028 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:57:54.859494 sshd[10658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:57:54.862581 systemd-logind[2681]: New session 21 of user core.
Feb 14 01:57:54.872909 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 14 01:57:55.203922 sshd[10658]: pam_unix(sshd:session): session closed for user core
Feb 14 01:57:55.206846 systemd[1]: sshd@19-147.75.62.106:22-139.178.68.195:58028.service: Deactivated successfully.
Feb 14 01:57:55.208368 systemd[1]: session-21.scope: Deactivated successfully.
Feb 14 01:57:55.208881 systemd-logind[2681]: Session 21 logged out. Waiting for processes to exit.
Feb 14 01:57:55.209435 systemd-logind[2681]: Removed session 21.
Feb 14 01:57:57.586051 update_engine[2691]: I20250214 01:57:57.585976 2691 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 14 01:57:57.586447 update_engine[2691]: I20250214 01:57:57.586222 2691 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 14 01:57:57.586472 update_engine[2691]: I20250214 01:57:57.586445 2691 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 14 01:57:57.587136 update_engine[2691]: E20250214 01:57:57.587116 2691 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 14 01:57:57.587172 update_engine[2691]: I20250214 01:57:57.587160 2691 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 14 01:58:00.282264 systemd[1]: Started sshd@20-147.75.62.106:22-139.178.68.195:57828.service - OpenSSH per-connection server daemon (139.178.68.195:57828).
Feb 14 01:58:00.687823 sshd[10745]: Accepted publickey for core from 139.178.68.195 port 57828 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:58:00.688921 sshd[10745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:58:00.691772 systemd-logind[2681]: New session 22 of user core.
Feb 14 01:58:00.704851 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 14 01:58:01.036437 sshd[10745]: pam_unix(sshd:session): session closed for user core
Feb 14 01:58:01.039270 systemd[1]: sshd@20-147.75.62.106:22-139.178.68.195:57828.service: Deactivated successfully.
Feb 14 01:58:01.040976 systemd[1]: session-22.scope: Deactivated successfully.
Feb 14 01:58:01.041527 systemd-logind[2681]: Session 22 logged out. Waiting for processes to exit.
Feb 14 01:58:01.042108 systemd-logind[2681]: Removed session 22.
Feb 14 01:58:06.117308 systemd[1]: Started sshd@21-147.75.62.106:22-139.178.68.195:57842.service - OpenSSH per-connection server daemon (139.178.68.195:57842).
Feb 14 01:58:06.538031 sshd[10783]: Accepted publickey for core from 139.178.68.195 port 57842 ssh2: RSA SHA256:aR453Z1bN6Bo44cOSLYOTBQ5gip+izhqkNhSTvu+K8g
Feb 14 01:58:06.539068 sshd[10783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 01:58:06.541829 systemd-logind[2681]: New session 23 of user core.
Feb 14 01:58:06.550889 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 14 01:58:06.899211 sshd[10783]: pam_unix(sshd:session): session closed for user core
Feb 14 01:58:06.902094 systemd[1]: sshd@21-147.75.62.106:22-139.178.68.195:57842.service: Deactivated successfully.
Feb 14 01:58:06.904361 systemd[1]: session-23.scope: Deactivated successfully.
Feb 14 01:58:06.904867 systemd-logind[2681]: Session 23 logged out. Waiting for processes to exit.
Feb 14 01:58:06.905417 systemd-logind[2681]: Removed session 23.
Feb 14 01:58:07.583523 update_engine[2691]: I20250214 01:58:07.583468 2691 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 14 01:58:07.583859 update_engine[2691]: I20250214 01:58:07.583766 2691 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 14 01:58:07.583991 update_engine[2691]: I20250214 01:58:07.583973 2691 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 14 01:58:07.584518 update_engine[2691]: E20250214 01:58:07.584503 2691 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 14 01:58:07.584552 update_engine[2691]: I20250214 01:58:07.584542 2691 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 14 01:58:07.584571 update_engine[2691]: I20250214 01:58:07.584550 2691 omaha_request_action.cc:617] Omaha request response:
Feb 14 01:58:07.584635 update_engine[2691]: E20250214 01:58:07.584623 2691 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 14 01:58:07.584656 update_engine[2691]: I20250214 01:58:07.584641 2691 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 14 01:58:07.584656 update_engine[2691]: I20250214 01:58:07.584647 2691 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 14 01:58:07.584656 update_engine[2691]: I20250214 01:58:07.584652 2691 update_attempter.cc:306] Processing Done.
Feb 14 01:58:07.584710 update_engine[2691]: E20250214 01:58:07.584666 2691 update_attempter.cc:619] Update failed.
Feb 14 01:58:07.584710 update_engine[2691]: I20250214 01:58:07.584671 2691 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 14 01:58:07.584710 update_engine[2691]: I20250214 01:58:07.584676 2691 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 14 01:58:07.584710 update_engine[2691]: I20250214 01:58:07.584679 2691 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 14 01:58:07.584794 update_engine[2691]: I20250214 01:58:07.584728 2691 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 14 01:58:07.584794 update_engine[2691]: I20250214 01:58:07.584756 2691 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 14 01:58:07.584794 update_engine[2691]: I20250214 01:58:07.584764 2691 omaha_request_action.cc:272] Request:
Feb 14 01:58:07.584794 update_engine[2691]:
Feb 14 01:58:07.584794 update_engine[2691]:
Feb 14 01:58:07.584794 update_engine[2691]:
Feb 14 01:58:07.584794 update_engine[2691]:
Feb 14 01:58:07.584794 update_engine[2691]:
Feb 14 01:58:07.584794 update_engine[2691]:
Feb 14 01:58:07.584794 update_engine[2691]: I20250214 01:58:07.584769 2691 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 14 01:58:07.584964 update_engine[2691]: I20250214 01:58:07.584880 2691 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 14 01:58:07.584986 locksmithd[2720]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 14 01:58:07.585144 update_engine[2691]: I20250214 01:58:07.585021 2691 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 14 01:58:07.585601 update_engine[2691]: E20250214 01:58:07.585583 2691 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 14 01:58:07.585632 update_engine[2691]: I20250214 01:58:07.585621 2691 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 14 01:58:07.585654 update_engine[2691]: I20250214 01:58:07.585629 2691 omaha_request_action.cc:617] Omaha request response:
Feb 14 01:58:07.585654 update_engine[2691]: I20250214 01:58:07.585635 2691 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 14 01:58:07.585654 update_engine[2691]: I20250214 01:58:07.585639 2691 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 14 01:58:07.585654 update_engine[2691]: I20250214 01:58:07.585644 2691 update_attempter.cc:306] Processing Done.
Feb 14 01:58:07.585654 update_engine[2691]: I20250214 01:58:07.585649 2691 update_attempter.cc:310] Error event sent.
Feb 14 01:58:07.585755 update_engine[2691]: I20250214 01:58:07.585656 2691 update_check_scheduler.cc:74] Next update check in 47m19s
Feb 14 01:58:07.585850 locksmithd[2720]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0