May 15 01:17:25.159801 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 15 01:17:25.159825 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025 May 15 01:17:25.159834 kernel: KASLR enabled May 15 01:17:25.159839 kernel: efi: EFI v2.7 by American Megatrends May 15 01:17:25.159845 kernel: efi: ACPI 2.0=0xec090000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea49f818 RNG=0xebf20018 MEMRESERVE=0xe467bf98 May 15 01:17:25.159850 kernel: random: crng init done May 15 01:17:25.159875 kernel: secureboot: Secure boot disabled May 15 01:17:25.159881 kernel: esrt: Reserving ESRT space from 0x00000000ea49f818 to 0x00000000ea49f878. May 15 01:17:25.159889 kernel: ACPI: Early table checksum verification disabled May 15 01:17:25.159895 kernel: ACPI: RSDP 0x00000000EC090000 000024 (v02 Ampere) May 15 01:17:25.159901 kernel: ACPI: XSDT 0x00000000EC080000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 15 01:17:25.159907 kernel: ACPI: FACP 0x00000000EC060000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 15 01:17:25.159912 kernel: ACPI: DSDT 0x00000000EC000000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 15 01:17:25.159918 kernel: ACPI: DBG2 0x00000000EC070000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 15 01:17:25.159927 kernel: ACPI: GTDT 0x00000000EC050000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 15 01:17:25.159933 kernel: ACPI: SSDT 0x00000000EC040000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 15 01:17:25.159939 kernel: ACPI: FIDT 0x00000000EBFF0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 15 01:17:25.159945 kernel: ACPI: SPCR 0x00000000EBFE0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 15 01:17:25.159952 kernel: ACPI: BGRT 0x00000000EBFD0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) May 15 01:17:25.159958 kernel: ACPI: MCFG 0x00000000EBFC0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 15 01:17:25.159964 kernel: ACPI: IORT 0x00000000EBFB0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 15 01:17:25.159970 kernel: ACPI: PPTT 0x00000000EBF90000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) May 15 01:17:25.159976 kernel: ACPI: SLIT 0x00000000EBF80000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 15 01:17:25.159983 kernel: ACPI: SRAT 0x00000000EBF70000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 15 01:17:25.159990 kernel: ACPI: APIC 0x00000000EBFA0000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 15 01:17:25.159997 kernel: ACPI: PCCT 0x00000000EBF50000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) May 15 01:17:25.160003 kernel: ACPI: WSMT 0x00000000EBF40000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 15 01:17:25.160009 kernel: ACPI: FPDT 0x00000000EBF30000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 15 01:17:25.160015 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 15 01:17:25.160021 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 15 01:17:25.160028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 15 01:17:25.160034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 15 01:17:25.160040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 15 01:17:25.160046 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff] May 15 01:17:25.160052 kernel: Zone ranges: May 15 01:17:25.160060 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 15 01:17:25.160066 kernel: DMA32 empty May 15 01:17:25.160072 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 15 01:17:25.160078 kernel: Movable zone start for each node May 15 01:17:25.160084 kernel: Early memory node ranges May 15 01:17:25.160093 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 15 01:17:25.160100 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] May 15 01:17:25.160108 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 15 01:17:25.160114 kernel: node 0: [mem 0x0000000094000000-0x00000000eba47fff] May 15 01:17:25.160121 kernel: node 0: [mem 0x00000000eba48000-0x00000000ebbc5fff] May 15 01:17:25.160127 kernel: node 0: [mem 0x00000000ebbc6000-0x00000000ebbc6fff] May 15 01:17:25.160134 kernel: node 0: [mem 0x00000000ebbc7000-0x00000000ebedffff] May 15 01:17:25.160140 kernel: node 0: [mem 0x00000000ebee0000-0x00000000ec0fffff] May 15 01:17:25.160146 kernel: node 0: [mem 0x00000000ec100000-0x00000000ec10ffff] May 15 01:17:25.160153 kernel: node 0: [mem 0x00000000ec110000-0x00000000ee53ffff] May 15 01:17:25.160159 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] May 15 01:17:25.160166 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 15 01:17:25.160174 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 15 01:17:25.160180 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 15 01:17:25.160186 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 15 01:17:25.160193 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 15 01:17:25.160199 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 15 01:17:25.160206 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 15 01:17:25.160212 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 15 01:17:25.160219 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 15 01:17:25.160225 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 15 01:17:25.160232 kernel: psci: probing for conduit method from ACPI. May 15 01:17:25.160238 kernel: psci: PSCIv1.1 detected in firmware. May 15 01:17:25.160246 kernel: psci: Using standard PSCI v0.2 function IDs May 15 01:17:25.160252 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 15 01:17:25.160259 kernel: psci: SMC Calling Convention v1.2 May 15 01:17:25.160265 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 15 01:17:25.160272 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 15 01:17:25.160278 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 15 01:17:25.160284 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 15 01:17:25.160291 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 15 01:17:25.160297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 15 01:17:25.160304 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 15 01:17:25.160310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 15 01:17:25.160317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 15 01:17:25.160324 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 15 01:17:25.160331 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 15 01:17:25.160337 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 15 01:17:25.160343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 15 01:17:25.160350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 15 01:17:25.160356 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 15 01:17:25.160363 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 15 01:17:25.160369 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 15 01:17:25.160376 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 15 01:17:25.160382 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 15 01:17:25.160389 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 15 01:17:25.160395 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 15 01:17:25.160403 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 May 15 01:17:25.160409 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 15 01:17:25.160416 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 May 15 01:17:25.160422 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 15 01:17:25.160429 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 15 01:17:25.160435 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 15 01:17:25.160441 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 15 01:17:25.160448 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 15 01:17:25.160454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 15 01:17:25.160461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 15 01:17:25.160467 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 15 01:17:25.160475 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 15 01:17:25.160482 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 15 01:17:25.160488 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 15 01:17:25.160494 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 15 01:17:25.160501 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 15 01:17:25.160507 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 15 01:17:25.160514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 15 01:17:25.160520 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 May 15 01:17:25.160527 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 15 01:17:25.160533 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 15 01:17:25.160539 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 15 01:17:25.160546 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 15 01:17:25.160553 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 15 01:17:25.160560 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 15 01:17:25.160567 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 15 01:17:25.160573 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 May 15 01:17:25.160580 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 15 01:17:25.160586 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 15 01:17:25.160593 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 15 01:17:25.160599 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 15 01:17:25.160611 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 15 01:17:25.160618 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 15 01:17:25.160627 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 15 01:17:25.160633 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 15 01:17:25.160640 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 15 01:17:25.160647 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 15 01:17:25.160654 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 15 01:17:25.160661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 15 01:17:25.160669 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 15 01:17:25.160676 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 15 01:17:25.160683 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 15 01:17:25.160690 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 15 01:17:25.160697 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 15 01:17:25.160704 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 15 01:17:25.160711 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 15 01:17:25.160717 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 15 01:17:25.160724 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 15 01:17:25.160731 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 15 01:17:25.160738 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 15 01:17:25.160745 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 May 15 01:17:25.160753 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 15 01:17:25.160760 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 15 01:17:25.160767 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 15 01:17:25.160774 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 15 01:17:25.160781 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 15 01:17:25.160788 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 15 01:17:25.160795 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 15 01:17:25.160802 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 15 01:17:25.160808 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 01:17:25.160815 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 01:17:25.160823 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 15 01:17:25.160831 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 15 01:17:25.160838 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 15 01:17:25.160844 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 15 01:17:25.160851 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 15 01:17:25.160887 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 15 01:17:25.160894 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 15 01:17:25.160901 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 15 01:17:25.160908 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 15 01:17:25.160914 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 15 01:17:25.160921 kernel: Detected PIPT I-cache on CPU0 May 15 01:17:25.160928 kernel: CPU features: detected: GIC system register CPU interface May 15 01:17:25.160937 kernel: CPU features: detected: Virtualization Host Extensions May 15 01:17:25.160944 kernel: CPU features: detected: Hardware dirty bit management May 15 01:17:25.160951 kernel: CPU features: detected: Spectre-v4 May 15 01:17:25.160958 kernel: CPU features: detected: Spectre-BHB May 15 01:17:25.160965 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 01:17:25.160972 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 01:17:25.160979 kernel: CPU features: detected: ARM erratum 1418040 May 15 01:17:25.160986 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 01:17:25.160992 kernel: alternatives: applying boot alternatives May 15 01:17:25.161001 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 15 01:17:25.161008 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 01:17:25.161016 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 15 01:17:25.161023 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 15 01:17:25.161030 kernel: printk: log_buf_len min size: 262144 bytes May 15 01:17:25.161037 kernel: printk: log_buf_len: 1048576 bytes May 15 01:17:25.161044 kernel: printk: early log buf free: 249992(95%) May 15 01:17:25.161051 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 15 01:17:25.161058 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 15 01:17:25.161065 kernel: Fallback order for Node 0: 0 May 15 01:17:25.161072 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 15 01:17:25.161078 kernel: Policy zone: Normal May 15 01:17:25.161085 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 01:17:25.161093 kernel: software IO TLB: area num 128. 
May 15 01:17:25.161101 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 15 01:17:25.161108 kernel: Memory: 262923416K/268174336K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 5250920K reserved, 0K cma-reserved) May 15 01:17:25.161115 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 15 01:17:25.161122 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 01:17:25.161129 kernel: rcu: RCU event tracing is enabled. May 15 01:17:25.161136 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 15 01:17:25.161144 kernel: Trampoline variant of Tasks RCU enabled. May 15 01:17:25.161151 kernel: Tracing variant of Tasks RCU enabled. May 15 01:17:25.161158 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 01:17:25.161165 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 15 01:17:25.161173 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 01:17:25.161180 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 15 01:17:25.161187 kernel: GICv3: 672 SPIs implemented May 15 01:17:25.161194 kernel: GICv3: 0 Extended SPIs implemented May 15 01:17:25.161201 kernel: Root IRQ handler: gic_handle_irq May 15 01:17:25.161207 kernel: GICv3: GICv3 features: 16 PPIs May 15 01:17:25.161214 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 15 01:17:25.161221 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 15 01:17:25.161228 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 15 01:17:25.161235 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 15 01:17:25.161242 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 15 01:17:25.161249 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 15 01:17:25.161255 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 15 01:17:25.161264 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 15 01:17:25.161270 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 15 01:17:25.161277 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 15 01:17:25.161284 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161291 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161298 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 15 01:17:25.161305 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161312 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161319 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 15 01:17:25.161326 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161333 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161341 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 15 01:17:25.161348 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161355 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161362 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 15 01:17:25.161369 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161376 kernel: ITS@0x00001001000c0000: allocated 32768 
Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161383 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 15 01:17:25.161390 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161397 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161404 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 15 01:17:25.161411 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161420 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161426 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 15 01:17:25.161433 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 15 01:17:25.161441 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 15 01:17:25.161448 kernel: GICv3: using LPI property table @0x00000800003e0000 May 15 01:17:25.161455 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 15 01:17:25.161462 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 01:17:25.161469 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161476 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 15 01:17:25.161483 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 15 01:17:25.161490 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 01:17:25.161498 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 01:17:25.161505 kernel: Console: colour dummy device 80x25 May 15 01:17:25.161512 kernel: printk: console [tty0] enabled May 15 01:17:25.161519 kernel: ACPI: Core revision 20230628 May 15 01:17:25.161527 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 01:17:25.161534 kernel: pid_max: default: 81920 minimum: 640 May 15 01:17:25.161541 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 01:17:25.161548 kernel: landlock: Up and running. May 15 01:17:25.161555 kernel: SELinux: Initializing. May 15 01:17:25.161563 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 01:17:25.161571 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 01:17:25.161578 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 15 01:17:25.161586 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 15 01:17:25.161593 kernel: rcu: Hierarchical SRCU implementation. May 15 01:17:25.161600 kernel: rcu: Max phase no-delay instances is 400. 
May 15 01:17:25.161607 kernel: Platform MSI: ITS@0x100100040000 domain created May 15 01:17:25.161614 kernel: Platform MSI: ITS@0x100100060000 domain created May 15 01:17:25.161621 kernel: Platform MSI: ITS@0x100100080000 domain created May 15 01:17:25.161628 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 15 01:17:25.161637 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 15 01:17:25.161644 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 15 01:17:25.161651 kernel: Platform MSI: ITS@0x100100100000 domain created May 15 01:17:25.161658 kernel: Platform MSI: ITS@0x100100120000 domain created May 15 01:17:25.161665 kernel: PCI/MSI: ITS@0x100100040000 domain created May 15 01:17:25.161672 kernel: PCI/MSI: ITS@0x100100060000 domain created May 15 01:17:25.161679 kernel: PCI/MSI: ITS@0x100100080000 domain created May 15 01:17:25.161686 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 15 01:17:25.161693 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 15 01:17:25.161701 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 15 01:17:25.161708 kernel: PCI/MSI: ITS@0x100100100000 domain created May 15 01:17:25.161715 kernel: PCI/MSI: ITS@0x100100120000 domain created May 15 01:17:25.161722 kernel: Remapping and enabling EFI services. May 15 01:17:25.161729 kernel: smp: Bringing up secondary CPUs ... May 15 01:17:25.161737 kernel: Detected PIPT I-cache on CPU1 May 15 01:17:25.161744 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 May 15 01:17:25.161751 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 May 15 01:17:25.161758 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161766 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] May 15 01:17:25.161773 kernel: Detected PIPT I-cache on CPU2 May 15 01:17:25.161780 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 May 15 01:17:25.161787 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 May 15 01:17:25.161795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161802 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] May 15 01:17:25.161809 kernel: Detected PIPT I-cache on CPU3 May 15 01:17:25.161816 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 May 15 01:17:25.161823 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 May 15 01:17:25.161830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161838 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] May 15 01:17:25.161845 kernel: Detected PIPT I-cache on CPU4 May 15 01:17:25.161852 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 May 15 01:17:25.161862 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 May 15 01:17:25.161869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161876 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] May 15 01:17:25.161883 kernel: Detected PIPT I-cache on CPU5 May 15 01:17:25.161890 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 May 15 01:17:25.161897 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 May 15 01:17:25.161906 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161913 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] May 15 01:17:25.161920 kernel: Detected PIPT I-cache on CPU6 May 15 01:17:25.161927 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 May 15 01:17:25.161935 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 May 15 01:17:25.161942 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161949 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] May 15 01:17:25.161956 kernel: Detected PIPT I-cache on CPU7 May 15 01:17:25.161963 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 May 15 01:17:25.161970 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 May 15 01:17:25.161979 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.161986 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] May 15 01:17:25.161993 kernel: Detected PIPT I-cache on CPU8 May 15 01:17:25.162000 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 May 15 01:17:25.162007 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 May 15 01:17:25.162015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162021 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] May 15 01:17:25.162029 kernel: Detected PIPT I-cache on CPU9 May 15 01:17:25.162036 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 May 15 01:17:25.162045 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 May 15 01:17:25.162052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162059 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] May 15 01:17:25.162066 kernel: Detected PIPT I-cache on CPU10 May 15 01:17:25.162073 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 May 15 01:17:25.162080 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 May 15 01:17:25.162087 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162094 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] May 15 01:17:25.162101 kernel: Detected PIPT I-cache on CPU11 May 15 01:17:25.162109 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 May 15 01:17:25.162117 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 May 15 01:17:25.162124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162131 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] May 15 01:17:25.162138 kernel: Detected PIPT I-cache on CPU12 May 15 01:17:25.162145 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 May 15 01:17:25.162152 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 May 15 01:17:25.162159 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162166 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] May 15 01:17:25.162173 kernel: Detected PIPT I-cache on CPU13 May 15 01:17:25.162181 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 May 15 01:17:25.162189 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 May 15 01:17:25.162196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162204 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
May 15 01:17:25.162211 kernel: Detected PIPT I-cache on CPU14 May 15 01:17:25.162218 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 May 15 01:17:25.162225 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 May 15 01:17:25.162232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162239 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] May 15 01:17:25.162246 kernel: Detected PIPT I-cache on CPU15 May 15 01:17:25.162255 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 May 15 01:17:25.162262 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 May 15 01:17:25.162269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162276 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] May 15 01:17:25.162283 kernel: Detected PIPT I-cache on CPU16 May 15 01:17:25.162290 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 May 15 01:17:25.162297 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 May 15 01:17:25.162304 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162312 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] May 15 01:17:25.162328 kernel: Detected PIPT I-cache on CPU17 May 15 01:17:25.162337 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 May 15 01:17:25.162345 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 May 15 01:17:25.162352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162360 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] May 15 01:17:25.162367 kernel: Detected PIPT I-cache on CPU18 May 15 01:17:25.162374 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 May 15 01:17:25.162382 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 May 15 01:17:25.162389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162398 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] May 15 01:17:25.162405 kernel: Detected PIPT I-cache on CPU19 May 15 01:17:25.162413 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 15 01:17:25.162420 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 15 01:17:25.162428 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162435 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 15 01:17:25.162442 kernel: Detected PIPT I-cache on CPU20 May 15 01:17:25.162451 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 15 01:17:25.162459 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 15 01:17:25.162466 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162473 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 15 01:17:25.162481 kernel: Detected PIPT I-cache on CPU21 May 15 01:17:25.162488 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 15 01:17:25.162496 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 15 01:17:25.162503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162511 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 
15 01:17:25.162519 kernel: Detected PIPT I-cache on CPU22 May 15 01:17:25.162527 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 15 01:17:25.162534 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 15 01:17:25.162543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162550 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 15 01:17:25.162559 kernel: Detected PIPT I-cache on CPU23 May 15 01:17:25.162567 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 15 01:17:25.162574 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 May 15 01:17:25.162582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162590 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 15 01:17:25.162598 kernel: Detected PIPT I-cache on CPU24 May 15 01:17:25.162605 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 15 01:17:25.162613 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 15 01:17:25.162620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162627 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 15 01:17:25.162635 kernel: Detected PIPT I-cache on CPU25 May 15 01:17:25.162642 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 15 01:17:25.162650 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 15 01:17:25.162659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162666 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 15 01:17:25.162673 kernel: Detected PIPT I-cache on CPU26 May 15 01:17:25.162681 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 15 01:17:25.162688 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 15 01:17:25.162696 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162703 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 15 01:17:25.162710 kernel: Detected PIPT I-cache on CPU27 May 15 01:17:25.162718 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 15 01:17:25.162725 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 15 01:17:25.162734 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162742 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 15 01:17:25.162749 kernel: Detected PIPT I-cache on CPU28 May 15 01:17:25.162756 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 15 01:17:25.162764 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 15 01:17:25.162771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162779 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 15 01:17:25.162786 kernel: Detected PIPT I-cache on CPU29 May 15 01:17:25.162794 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 15 01:17:25.162803 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 15 01:17:25.162810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162817 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
May 15 01:17:25.162825 kernel: Detected PIPT I-cache on CPU30 May 15 01:17:25.162832 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 15 01:17:25.162840 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 15 01:17:25.162847 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162858 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 15 01:17:25.162866 kernel: Detected PIPT I-cache on CPU31 May 15 01:17:25.162873 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 15 01:17:25.162883 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 15 01:17:25.162890 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162897 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 15 01:17:25.162905 kernel: Detected PIPT I-cache on CPU32 May 15 01:17:25.162912 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 15 01:17:25.162920 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 May 15 01:17:25.162927 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162935 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 15 01:17:25.162942 kernel: Detected PIPT I-cache on CPU33 May 15 01:17:25.162951 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 15 01:17:25.162958 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 15 01:17:25.162966 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.162973 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 15 01:17:25.162980 kernel: Detected PIPT I-cache on CPU34 May 15 01:17:25.162988 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 15 01:17:25.162995 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 15 01:17:25.163003 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163010 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 15 01:17:25.163018 kernel: Detected PIPT I-cache on CPU35 May 15 01:17:25.163026 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 15 01:17:25.163034 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 15 01:17:25.163041 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163048 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 15 01:17:25.163056 kernel: Detected PIPT I-cache on CPU36 May 15 01:17:25.163063 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 15 01:17:25.163072 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 15 01:17:25.163080 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163087 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 15 01:17:25.163095 kernel: Detected PIPT I-cache on CPU37 May 15 01:17:25.163103 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 15 01:17:25.163110 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 15 01:17:25.163118 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163125 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
May 15 01:17:25.163132 kernel: Detected PIPT I-cache on CPU38 May 15 01:17:25.163140 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 15 01:17:25.163147 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 15 01:17:25.163155 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163162 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 15 01:17:25.163171 kernel: Detected PIPT I-cache on CPU39 May 15 01:17:25.163178 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 15 01:17:25.163186 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 15 01:17:25.163193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163201 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 15 01:17:25.163208 kernel: Detected PIPT I-cache on CPU40 May 15 01:17:25.163216 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 15 01:17:25.163223 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 15 01:17:25.163232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163240 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 15 01:17:25.163247 kernel: Detected PIPT I-cache on CPU41 May 15 01:17:25.163254 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 15 01:17:25.163262 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 15 01:17:25.163269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163277 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 15 01:17:25.163284 kernel: Detected PIPT I-cache on CPU42 May 15 01:17:25.163291 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 15 01:17:25.163300 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 15 01:17:25.163308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163315 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 15 01:17:25.163323 kernel: Detected PIPT I-cache on CPU43 May 15 01:17:25.163330 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 15 01:17:25.163338 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 15 01:17:25.163345 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163352 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 15 01:17:25.163360 kernel: Detected PIPT I-cache on CPU44 May 15 01:17:25.163367 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 15 01:17:25.163376 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 15 01:17:25.163384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163391 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 15 01:17:25.163398 kernel: Detected PIPT I-cache on CPU45 May 15 01:17:25.163406 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 15 01:17:25.163413 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 15 01:17:25.163421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163428 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] May 15 01:17:25.163436 kernel: Detected PIPT I-cache on CPU46 May 15 01:17:25.163445 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 15 01:17:25.163452 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 15 01:17:25.163459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163467 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 15 01:17:25.163474 kernel: Detected PIPT I-cache on CPU47 May 15 01:17:25.163482 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 15 01:17:25.163489 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 15 01:17:25.163497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163504 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 15 01:17:25.163513 kernel: Detected PIPT I-cache on CPU48 May 15 01:17:25.163522 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 15 01:17:25.163529 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 15 01:17:25.163537 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163544 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 15 01:17:25.163552 kernel: Detected PIPT I-cache on CPU49 May 15 01:17:25.163559 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 15 01:17:25.163567 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 15 01:17:25.163574 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163581 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 15 01:17:25.163590 kernel: Detected PIPT I-cache on CPU50 May 15 01:17:25.163598 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 15 01:17:25.163605 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 15 01:17:25.163613 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163620 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 15 01:17:25.163627 kernel: Detected PIPT I-cache on CPU51 May 15 01:17:25.163635 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 15 01:17:25.163642 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 15 01:17:25.163650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163658 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 15 01:17:25.163666 kernel: Detected PIPT I-cache on CPU52 May 15 01:17:25.163674 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 15 01:17:25.163681 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 15 01:17:25.163689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163696 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 15 01:17:25.163703 kernel: Detected PIPT I-cache on CPU53 May 15 01:17:25.163711 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 15 01:17:25.163718 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 15 01:17:25.163726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163734 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] May 15 01:17:25.163742 kernel: Detected PIPT I-cache on CPU54 May 15 01:17:25.163749 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 15 01:17:25.163757 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 15 01:17:25.163764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163771 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 15 01:17:25.163781 kernel: Detected PIPT I-cache on CPU55 May 15 01:17:25.163788 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 15 01:17:25.163796 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 15 01:17:25.163805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163812 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 15 01:17:25.163820 kernel: Detected PIPT I-cache on CPU56 May 15 01:17:25.163827 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 15 01:17:25.163834 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 15 01:17:25.163842 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163849 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 15 01:17:25.163876 kernel: Detected PIPT I-cache on CPU57 May 15 01:17:25.163884 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 15 01:17:25.163891 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 15 01:17:25.163900 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163908 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 15 01:17:25.163915 kernel: Detected PIPT I-cache on CPU58 May 15 01:17:25.163923 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 15 01:17:25.163930 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 15 01:17:25.163938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163945 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 15 01:17:25.163953 kernel: Detected PIPT I-cache on CPU59 May 15 01:17:25.163960 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 15 01:17:25.163969 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 15 01:17:25.163976 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.163984 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 15 01:17:25.163991 kernel: Detected PIPT I-cache on CPU60 May 15 01:17:25.163999 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 15 01:17:25.164006 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 15 01:17:25.164014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164021 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 15 01:17:25.164028 kernel: Detected PIPT I-cache on CPU61 May 15 01:17:25.164036 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 15 01:17:25.164045 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 15 01:17:25.164052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164059 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] May 15 01:17:25.164067 kernel: Detected PIPT I-cache on CPU62 May 15 01:17:25.164074 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 15 01:17:25.164082 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 15 01:17:25.164090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164097 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 15 01:17:25.164104 kernel: Detected PIPT I-cache on CPU63 May 15 01:17:25.164113 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 15 01:17:25.164121 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 15 01:17:25.164128 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164136 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 15 01:17:25.164143 kernel: Detected PIPT I-cache on CPU64 May 15 01:17:25.164151 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 15 01:17:25.164158 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 15 01:17:25.164166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164173 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 15 01:17:25.164182 kernel: Detected PIPT I-cache on CPU65 May 15 01:17:25.164189 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 15 01:17:25.164197 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 15 01:17:25.164204 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164212 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 15 01:17:25.164219 kernel: Detected PIPT I-cache on CPU66 May 15 01:17:25.164226 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 15 01:17:25.164234 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 15 01:17:25.164242 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164249 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 15 01:17:25.164258 kernel: Detected PIPT I-cache on CPU67 May 15 01:17:25.164265 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 15 01:17:25.164272 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 15 01:17:25.164280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164287 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 15 01:17:25.164295 kernel: Detected PIPT I-cache on CPU68 May 15 01:17:25.164302 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 15 01:17:25.164310 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 15 01:17:25.164317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164326 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 15 01:17:25.164333 kernel: Detected PIPT I-cache on CPU69 May 15 01:17:25.164341 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 15 01:17:25.164348 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 15 01:17:25.164356 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164363 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] May 15 01:17:25.164370 kernel: Detected PIPT I-cache on CPU70 May 15 01:17:25.164378 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 15 01:17:25.164386 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 15 01:17:25.164393 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164402 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 15 01:17:25.164409 kernel: Detected PIPT I-cache on CPU71 May 15 01:17:25.164417 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 15 01:17:25.164424 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 15 01:17:25.164431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164439 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 15 01:17:25.164446 kernel: Detected PIPT I-cache on CPU72 May 15 01:17:25.164454 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 15 01:17:25.164462 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 15 01:17:25.164470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164478 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 15 01:17:25.164485 kernel: Detected PIPT I-cache on CPU73 May 15 01:17:25.164493 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 15 01:17:25.164500 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 15 01:17:25.164508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164515 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 15 01:17:25.164522 kernel: Detected PIPT I-cache on CPU74 May 15 01:17:25.164530 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 15 01:17:25.164537 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 15 01:17:25.164546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164554 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 15 01:17:25.164561 kernel: Detected PIPT I-cache on CPU75 May 15 01:17:25.164569 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 15 01:17:25.164576 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 15 01:17:25.164584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164591 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 15 01:17:25.164598 kernel: Detected PIPT I-cache on CPU76 May 15 01:17:25.164606 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 15 01:17:25.164615 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 15 01:17:25.164622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164630 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 15 01:17:25.164637 kernel: Detected PIPT I-cache on CPU77 May 15 01:17:25.164645 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 15 01:17:25.164652 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 15 01:17:25.164659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164667 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] May 15 01:17:25.164674 kernel: Detected PIPT I-cache on CPU78 May 15 01:17:25.164681 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 15 01:17:25.164690 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 15 01:17:25.164698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164705 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 15 01:17:25.164712 kernel: Detected PIPT I-cache on CPU79 May 15 01:17:25.164720 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 15 01:17:25.164727 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 15 01:17:25.164734 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 01:17:25.164742 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 15 01:17:25.164749 kernel: smp: Brought up 1 node, 80 CPUs May 15 01:17:25.164758 kernel: SMP: Total of 80 processors activated. May 15 01:17:25.164766 kernel: CPU features: detected: 32-bit EL0 Support May 15 01:17:25.164773 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 01:17:25.164781 kernel: CPU features: detected: Common not Private translations May 15 01:17:25.164788 kernel: CPU features: detected: CRC32 instructions May 15 01:17:25.164795 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 01:17:25.164803 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 01:17:25.164810 kernel: CPU features: detected: LSE atomic instructions May 15 01:17:25.164818 kernel: CPU features: detected: Privileged Access Never May 15 01:17:25.164827 kernel: CPU features: detected: RAS Extension Support May 15 01:17:25.164834 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 01:17:25.164842 kernel: CPU: All CPU(s) started at EL2 May 15 01:17:25.164850 kernel: alternatives: applying system-wide alternatives May 15 01:17:25.164859 kernel: devtmpfs: initialized May 15 01:17:25.164867 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 01:17:25.164875 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 15 01:17:25.164882 kernel: pinctrl core: initialized pinctrl subsystem May 15 01:17:25.164890 kernel: SMBIOS 3.4.0 present. May 15 01:17:25.164899 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 15 01:17:25.164906 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 01:17:25.164914 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 15 01:17:25.164921 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 01:17:25.164929 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 01:17:25.164936 kernel: audit: initializing netlink subsys (disabled) May 15 01:17:25.164943 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 15 01:17:25.164951 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 01:17:25.164958 kernel: cpuidle: using governor menu May 15 01:17:25.164967 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 15 01:17:25.164974 kernel: ASID allocator initialised with 32768 entries May 15 01:17:25.164982 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 01:17:25.164989 kernel: Serial: AMBA PL011 UART driver May 15 01:17:25.164997 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 01:17:25.165004 kernel: Modules: 0 pages in range for non-PLT usage May 15 01:17:25.165011 kernel: Modules: 509264 pages in range for PLT usage May 15 01:17:25.165019 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 01:17:25.165026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 01:17:25.165035 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 01:17:25.165043 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 01:17:25.165050 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 01:17:25.165057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 01:17:25.165065 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 01:17:25.165073 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 01:17:25.165080 kernel: ACPI: Added _OSI(Module Device) May 15 01:17:25.165087 kernel: ACPI: Added _OSI(Processor Device) May 15 01:17:25.165094 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 01:17:25.165103 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 01:17:25.165111 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 15 01:17:25.165118 kernel: ACPI: Interpreter enabled May 15 01:17:25.165125 kernel: ACPI: Using GIC for interrupt routing May 15 01:17:25.165133 kernel: ACPI: MCFG table detected, 8 entries May 15 01:17:25.165140 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165148 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165155 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165163 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165172 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165179 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165186 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165194 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 15 01:17:25.165201 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 15 01:17:25.165209 kernel: printk: console [ttyAMA0] enabled May 15 01:17:25.165216 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 15 01:17:25.165224 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 15 01:17:25.165359 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.165433 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 01:17:25.165496 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.165557 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.165618 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 15 01:17:25.165678 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] May 15 01:17:25.165688 kernel: PCI host bridge to bus 000d:00 May 15 01:17:25.165763 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 15 01:17:25.165821 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 15 01:17:25.165884 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 15 01:17:25.165961 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 15 01:17:25.166036 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 15 01:17:25.166102 kernel: pci 000d:00:01.0: enabling Extended Tags May 15 01:17:25.166169 kernel: pci 000d:00:01.0: supports D1 D2 May 15 01:17:25.166235 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.166308 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 15 01:17:25.166374 kernel: pci 000d:00:02.0: supports D1 D2 May 15 01:17:25.166438 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 15 01:17:25.166511 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 15 01:17:25.166577 kernel: pci 000d:00:03.0: supports D1 D2 May 15 01:17:25.166646 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.166720 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 15 01:17:25.166785 kernel: pci 000d:00:04.0: supports D1 D2 May 15 01:17:25.166850 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 15 01:17:25.166864 kernel: acpiphp: Slot [1] registered May 15 01:17:25.166871 kernel: acpiphp: Slot [2] registered May 15 01:17:25.166879 kernel: acpiphp: Slot [3] registered May 15 01:17:25.166889 kernel: acpiphp: Slot [4] registered May 15 01:17:25.166948 kernel: pci_bus 000d:00: on NUMA node 0 May 15 01:17:25.167015 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 01:17:25.167081 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.167149 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.167215 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 01:17:25.167280 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.167344 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.167412 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 01:17:25.167476 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.167539 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.167606 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 01:17:25.167670 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.167734 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.167800 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 15 01:17:25.167868 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 15 01:17:25.167933 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] May 15 01:17:25.167997 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 15 01:17:25.168062 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 15 01:17:25.168125 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 15 01:17:25.168190 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 15 01:17:25.168254 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 15 01:17:25.168320 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.168385 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.168448 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.168513 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.168576 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.168641 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.168704 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.168771 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.168835 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.168902 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.168967 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.169031 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.169096 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.169159 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.169224 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.169287 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.169355 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 15 01:17:25.169419 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 15 01:17:25.169483 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 15 01:17:25.169548 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 15 01:17:25.169611 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 15 01:17:25.169675 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 15 01:17:25.169739 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 15 01:17:25.169805 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 15 01:17:25.169873 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 15 01:17:25.169937 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 15 01:17:25.170002 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 15 01:17:25.170065 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 15 01:17:25.170125 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 15 01:17:25.170184 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 15 01:17:25.170257 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 15 01:17:25.170317 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 15 01:17:25.170384 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] May 15 01:17:25.170445 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 15 01:17:25.170519 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 15 01:17:25.170582 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 15 01:17:25.170648 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 15 01:17:25.170708 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 15 01:17:25.170718 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 15 01:17:25.170786 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.170850 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 01:17:25.170920 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.170982 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.171045 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 15 01:17:25.171105 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 15 01:17:25.171115 kernel: PCI host bridge to bus 0000:00 May 15 01:17:25.171181 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 15 01:17:25.171238 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 15 01:17:25.171297 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 01:17:25.171370 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 15 01:17:25.171442 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 15 01:17:25.171507 kernel: pci 0000:00:01.0: enabling Extended Tags May 15 01:17:25.171571 kernel: pci 0000:00:01.0: supports D1 D2 May 15 01:17:25.171636 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.171708 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 15 01:17:25.171776 kernel: pci 0000:00:02.0: supports D1 D2 May 15 01:17:25.171839 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 15 01:17:25.171916 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 15 01:17:25.171982 kernel: pci 0000:00:03.0: supports D1 D2 May 15 01:17:25.172045 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.172117 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 15 01:17:25.172180 kernel: pci 0000:00:04.0: supports D1 D2 May 15 01:17:25.172246 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 15 01:17:25.172256 kernel: acpiphp: Slot [1-1] registered May 15 01:17:25.172264 kernel: acpiphp: Slot [2-1] registered May 15 01:17:25.172271 kernel: acpiphp: Slot [3-1] registered May 15 01:17:25.172279 kernel: acpiphp: Slot [4-1] registered May 15 01:17:25.172333 kernel: pci_bus 0000:00: on NUMA node 0 May 15 01:17:25.172398 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 01:17:25.172461 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.172528 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.172592 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 01:17:25.172655 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.172719 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.172782 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 01:17:25.172847 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.172914 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.172981 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 01:17:25.173046 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.173109 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.173173 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 15 01:17:25.173238 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 15 01:17:25.173303 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 15 01:17:25.173366 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 15 01:17:25.173432 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 15 01:17:25.173495 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 15 01:17:25.173559 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 15 01:17:25.173623 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 15 01:17:25.173687 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.173751 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.173813 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.173883 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.173949 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.174013 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.174077 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.174140 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.174203 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.174267 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.174331 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.174394 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.174458 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.174522 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.174587 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.174651 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.174714 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 15 01:17:25.174778 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 15 01:17:25.174841 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
May 15 01:17:25.174910 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 15 01:17:25.174975 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 15 01:17:25.175039 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 15 01:17:25.175104 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 15 01:17:25.175168 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 15 01:17:25.175234 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 15 01:17:25.175298 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 15 01:17:25.175362 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 15 01:17:25.175426 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 15 01:17:25.175485 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 15 01:17:25.175542 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 15 01:17:25.175612 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 15 01:17:25.175673 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 15 01:17:25.175740 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 15 01:17:25.175800 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 15 01:17:25.175877 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 15 01:17:25.175940 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 15 01:17:25.176009 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 15 01:17:25.176070 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 15 01:17:25.176080 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 15 01:17:25.176150 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.176213 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 01:17:25.176275 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.176339 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.176400 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 15 01:17:25.176461 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 15 01:17:25.176471 kernel: PCI host bridge to bus 0005:00 May 15 01:17:25.176535 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 15 01:17:25.176593 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 15 01:17:25.176649 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 15 01:17:25.176725 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 15 01:17:25.176798 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 15 01:17:25.176867 kernel: pci 0005:00:01.0: supports D1 D2 May 15 01:17:25.176931 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.177004 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 15 01:17:25.177069 kernel: pci 0005:00:03.0: supports D1 D2 May 15 01:17:25.177137 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.177211 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 15 01:17:25.177276 
kernel: pci 0005:00:05.0: supports D1 D2 May 15 01:17:25.177342 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 15 01:17:25.177413 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 15 01:17:25.177481 kernel: pci 0005:00:07.0: supports D1 D2 May 15 01:17:25.177547 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 15 01:17:25.177557 kernel: acpiphp: Slot [1-2] registered May 15 01:17:25.177568 kernel: acpiphp: Slot [2-2] registered May 15 01:17:25.177641 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 15 01:17:25.177708 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 15 01:17:25.177774 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 15 01:17:25.177851 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 15 01:17:25.177926 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 15 01:17:25.177993 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 15 01:17:25.178054 kernel: pci_bus 0005:00: on NUMA node 0 May 15 01:17:25.178119 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 01:17:25.178184 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.178247 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.178329 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 01:17:25.178425 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.178493 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.178561 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 01:17:25.178625 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.178690 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 15 01:17:25.178767 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 01:17:25.178833 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.178905 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 15 01:17:25.178974 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 15 01:17:25.179038 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 15 01:17:25.179104 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 15 01:17:25.179168 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 15 01:17:25.179232 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 15 01:17:25.179296 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 15 01:17:25.179359 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 15 01:17:25.179424 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 15 01:17:25.179489 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
May 15 01:17:25.179553 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.179618 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.179681 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.179746 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.179808 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.179876 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.179939 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.180007 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.180070 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.180136 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.180201 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.180264 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.180329 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.180391 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.180457 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.180520 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 15 01:17:25.180584 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 15 01:17:25.180651 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 15 01:17:25.180714 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 15 01:17:25.180781 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 15 01:17:25.180845 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 15 01:17:25.180917 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 15 01:17:25.180984 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 15 01:17:25.181051 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 15 01:17:25.181118 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 15 01:17:25.181181 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 15 01:17:25.181249 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 15 01:17:25.181314 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 15 01:17:25.181379 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 15 01:17:25.181442 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 15 01:17:25.181509 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 15 01:17:25.181568 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 15 01:17:25.181627 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 15 01:17:25.181698 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 15 01:17:25.181757 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 15 01:17:25.181833 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 15 01:17:25.181900 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 15 01:17:25.181969 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 15 01:17:25.182028 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 15 01:17:25.182095 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 15 01:17:25.182156 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 15 01:17:25.182168 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 15 01:17:25.182238 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.182301 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 01:17:25.182361 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.182423 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.182485 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 15 01:17:25.182547 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 15 01:17:25.182559 kernel: PCI host bridge to bus 0003:00 May 15 01:17:25.182626 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 15 01:17:25.182686 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 15 01:17:25.182744 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 15 01:17:25.182814 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 15 01:17:25.182893 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 15 01:17:25.182964 kernel: pci 0003:00:01.0: supports D1 D2 May 15 01:17:25.183033 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.183103 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 15 01:17:25.183168 kernel: pci 0003:00:03.0: supports D1 D2 May 15 01:17:25.183233 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.183304 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 15 01:17:25.183368 kernel: pci 0003:00:05.0: supports D1 D2 May 15 01:17:25.183437 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 15 01:17:25.183448 kernel: acpiphp: Slot [1-3] registered May 15 01:17:25.183456 kernel: acpiphp: Slot [2-3] registered May 15 01:17:25.183533 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 15 01:17:25.183602 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 15 01:17:25.183671 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 15 01:17:25.183739 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 15 01:17:25.183804 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 15 01:17:25.183878 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 15 01:17:25.183945 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 15 01:17:25.184012 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 15 01:17:25.184078 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 15 01:17:25.184143 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 15 01:17:25.184216 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 15 01:17:25.184282 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 15 01:17:25.184350 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 15 01:17:25.184418 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 15 01:17:25.184484 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 15 01:17:25.184550 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 15 01:17:25.184615 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 15 01:17:25.184681 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 15 01:17:25.184750 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 15 01:17:25.184810 kernel: pci_bus 0003:00: on NUMA node 0 May 15 01:17:25.184880 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 01:17:25.184944 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.185009 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.185073 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 01:17:25.185137 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.185200 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.185269 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 15 01:17:25.185334 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 15 01:17:25.185397 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 15 01:17:25.185461 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 15 01:17:25.185525 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 15 01:17:25.185600 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 15 01:17:25.185667 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 15 01:17:25.185732 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 15 01:17:25.185799 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.185866 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.185932 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.185995 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.186059 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.186122 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.186187 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.186249 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.186316 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.186379 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.186444 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.186508 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 
01:17:25.186572 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 15 01:17:25.186636 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 15 01:17:25.186700 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 15 01:17:25.186765 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 15 01:17:25.186831 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 15 01:17:25.187138 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 15 01:17:25.187212 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 15 01:17:25.187279 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 15 01:17:25.187344 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 15 01:17:25.187410 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 15 01:17:25.187478 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 15 01:17:25.187542 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 15 01:17:25.187607 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 15 01:17:25.187670 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 15 01:17:25.187734 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 15 01:17:25.187799 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 15 01:17:25.187867 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 15 01:17:25.187936 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 15 01:17:25.188000 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 15 01:17:25.188065 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 15 01:17:25.188129 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 15 01:17:25.188193 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 15 01:17:25.188257 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 15 01:17:25.188320 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 15 01:17:25.188383 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 15 01:17:25.188443 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 15 01:17:25.188499 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 15 01:17:25.188554 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 15 01:17:25.188636 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 15 01:17:25.188695 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 15 01:17:25.188762 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 15 01:17:25.188823 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 15 01:17:25.188892 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 15 01:17:25.188951 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 15 01:17:25.188962 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 15 01:17:25.189029 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.189091 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 
01:17:25.189152 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.189213 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.189273 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 15 01:17:25.189332 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 15 01:17:25.189343 kernel: PCI host bridge to bus 000c:00 May 15 01:17:25.189406 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 15 01:17:25.189462 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 15 01:17:25.189520 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 15 01:17:25.189591 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 15 01:17:25.189663 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 15 01:17:25.189727 kernel: pci 000c:00:01.0: enabling Extended Tags May 15 01:17:25.189791 kernel: pci 000c:00:01.0: supports D1 D2 May 15 01:17:25.189860 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.189934 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 15 01:17:25.190002 kernel: pci 000c:00:02.0: supports D1 D2 May 15 01:17:25.190064 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 15 01:17:25.190136 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 15 01:17:25.190199 kernel: pci 000c:00:03.0: supports D1 D2 May 15 01:17:25.190264 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.190335 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 15 01:17:25.190398 kernel: pci 000c:00:04.0: supports D1 D2 May 15 01:17:25.190464 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 15 01:17:25.190474 kernel: acpiphp: Slot [1-4] registered May 15 01:17:25.190482 kernel: acpiphp: Slot [2-4] registered May 15 01:17:25.190490 kernel: acpiphp: Slot [3-2] registered May 15 01:17:25.190497 kernel: acpiphp: Slot [4-2] registered May 15 01:17:25.190552 kernel: pci_bus 000c:00: on NUMA node 0 May 15 01:17:25.190616 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 01:17:25.190679 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.190745 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.190808 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 01:17:25.190875 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.190939 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.191002 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 01:17:25.191065 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.191128 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.191193 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 01:17:25.191256 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.191319 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.191382 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 15 01:17:25.191444 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] May 15 01:17:25.191507 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 15 01:17:25.191570 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 15 01:17:25.191635 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 15 01:17:25.191697 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 15 01:17:25.191761 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 15 01:17:25.191823 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 15 01:17:25.191890 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.191954 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192017 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.192080 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192145 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.192211 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192274 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.192337 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192400 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.192462 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192526 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.192587 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192650 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.192715 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192778 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.192841 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.192908 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 15 01:17:25.192972 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 15 01:17:25.193035 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 15 01:17:25.193098 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 15 01:17:25.193161 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 15 01:17:25.193227 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 15 01:17:25.193289 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 15 01:17:25.193352 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 15 01:17:25.193415 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 15 01:17:25.193478 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 15 01:17:25.193541 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 15 01:17:25.193606 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 15 
01:17:25.193664 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 15 01:17:25.193720 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 15 01:17:25.193789 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 15 01:17:25.193847 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 15 01:17:25.193925 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 15 01:17:25.193988 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 15 01:17:25.194053 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 15 01:17:25.194112 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 15 01:17:25.194178 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 15 01:17:25.194237 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 15 01:17:25.194247 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 15 01:17:25.194317 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.194379 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 01:17:25.194440 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.194500 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.194561 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 15 01:17:25.194620 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 15 01:17:25.194630 kernel: PCI host bridge to bus 0002:00 May 15 01:17:25.194697 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 15 01:17:25.194753 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 15 01:17:25.194810 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 15 01:17:25.194884 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 15 01:17:25.194955 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 15 01:17:25.195019 kernel: pci 0002:00:01.0: supports D1 D2 May 15 01:17:25.195081 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.195153 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 15 01:17:25.195217 kernel: pci 0002:00:03.0: supports D1 D2 May 15 01:17:25.195280 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.195349 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 15 01:17:25.195413 kernel: pci 0002:00:05.0: supports D1 D2 May 15 01:17:25.195477 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 15 01:17:25.195547 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 15 01:17:25.195614 kernel: pci 0002:00:07.0: supports D1 D2 May 15 01:17:25.195677 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 15 01:17:25.195687 kernel: acpiphp: Slot [1-5] registered May 15 01:17:25.195695 kernel: acpiphp: Slot [2-5] registered May 15 01:17:25.195703 kernel: acpiphp: Slot [3-3] registered May 15 01:17:25.195711 kernel: acpiphp: Slot [4-3] registered May 15 01:17:25.195766 kernel: pci_bus 0002:00: on NUMA node 0 May 15 01:17:25.195830 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 01:17:25.195899 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.195962 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 01:17:25.196028 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 01:17:25.196092 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.196154 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.196220 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 01:17:25.196283 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.196346 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.196409 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 01:17:25.196472 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.196536 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.196601 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 15 01:17:25.196664 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 15 01:17:25.196726 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 15 01:17:25.196789 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 15 01:17:25.196851 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 15 01:17:25.196918 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 15 01:17:25.196981 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 15 01:17:25.197045 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 15 01:17:25.197111 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.197173 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.197236 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.197299 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.197362 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.197425 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.197489 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.197553 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.197618 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.197681 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.197746 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.197810 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.197876 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.197940 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.198003 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] May 15 01:17:25.198068 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.198131 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 15 01:17:25.198197 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 15 01:17:25.198263 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 15 01:17:25.198326 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 15 01:17:25.198390 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 15 01:17:25.198454 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 15 01:17:25.198517 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 15 01:17:25.198583 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 15 01:17:25.198647 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 15 01:17:25.198711 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 15 01:17:25.198774 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 15 01:17:25.198838 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 15 01:17:25.198981 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 15 01:17:25.199041 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 15 01:17:25.199113 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 15 01:17:25.199171 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 15 01:17:25.199236 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 15 01:17:25.199308 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 15 01:17:25.199384 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 15 01:17:25.199442 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 15 01:17:25.199510 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 15 01:17:25.199568 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 15 01:17:25.199578 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 15 01:17:25.199645 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.199706 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 01:17:25.199765 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.199829 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.199894 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 15 01:17:25.199955 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 15 01:17:25.199965 kernel: PCI host bridge to bus 0001:00 May 15 01:17:25.200029 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 15 01:17:25.200085 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 15 01:17:25.200142 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 15 01:17:25.200213 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 15 01:17:25.200284 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 15 01:17:25.200349 kernel: pci 0001:00:01.0: enabling Extended Tags May 15 01:17:25.200411 kernel: pci 
0001:00:01.0: supports D1 D2 May 15 01:17:25.200474 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.200545 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 15 01:17:25.200611 kernel: pci 0001:00:02.0: supports D1 D2 May 15 01:17:25.200673 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 15 01:17:25.200742 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 15 01:17:25.200805 kernel: pci 0001:00:03.0: supports D1 D2 May 15 01:17:25.200875 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.200946 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 15 01:17:25.201009 kernel: pci 0001:00:04.0: supports D1 D2 May 15 01:17:25.201076 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 15 01:17:25.201086 kernel: acpiphp: Slot [1-6] registered May 15 01:17:25.201161 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 15 01:17:25.201226 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 15 01:17:25.201293 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 15 01:17:25.201358 kernel: pci 0001:01:00.0: PME# supported from D3cold May 15 01:17:25.201423 kernel: pci 0001:01:00.0: reg 0x1a4: [mem 0x380004800000-0x3800048fffff 64bit pref] May 15 01:17:25.201491 kernel: pci 0001:01:00.0: VF(n) BAR0 space: [mem 0x380004800000-0x380004ffffff 64bit pref] (contains BAR0 for 8 VFs) May 15 01:17:25.201557 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 15 01:17:25.201632 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 15 01:17:25.201698 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 15 01:17:25.201764 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 15 01:17:25.201829 kernel: pci 0001:01:00.1: PME# supported from D3cold May 15 01:17:25.201899 kernel: pci 0001:01:00.1: reg 0x1a4: [mem 0x380004000000-0x3800040fffff 64bit pref] May 15 01:17:25.201967 kernel: pci 0001:01:00.1: VF(n) BAR0 space: [mem 0x380004000000-0x3800047fffff 64bit pref] (contains BAR0 for 8 VFs) May 15 01:17:25.201977 kernel: acpiphp: Slot [2-6] registered May 15 01:17:25.201985 kernel: acpiphp: Slot [3-4] registered May 15 01:17:25.201993 kernel: acpiphp: Slot [4-4] registered May 15 01:17:25.202049 kernel: pci_bus 0001:00: on NUMA node 0 May 15 01:17:25.202113 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 01:17:25.202187 kernel: pci 0001:00:01.0: bridge window [mem 0x02000000-0x05ffffff 64bit pref] to [bus 01] add_size 2000000 add_align 2000000 May 15 01:17:25.202254 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 01:17:25.202320 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.202384 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 01:17:25.202448 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 01:17:25.202511 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.202574 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 
01:17:25.202637 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 01:17:25.202703 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.202766 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.202830 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380005ffffff 64bit pref] May 15 01:17:25.202936 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 15 01:17:25.203001 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 15 01:17:25.203064 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380006000000-0x3800061fffff 64bit pref] May 15 01:17:25.203127 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 15 01:17:25.203189 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380006200000-0x3800063fffff 64bit pref] May 15 01:17:25.203255 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 15 01:17:25.203317 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380006400000-0x3800065fffff 64bit pref] May 15 01:17:25.203380 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.203443 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.203506 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.203568 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.203631 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.203694 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.203760 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.203822 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.203888 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.203953 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.204015 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.204078 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.204140 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.204203 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.204265 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.204330 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.204397 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 15 01:17:25.204462 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 15 01:17:25.204527 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 15 01:17:25.204591 kernel: pci 0001:01:00.0: BAR 7: assigned [mem 0x380004000000-0x3800047fffff 64bit pref] May 15 01:17:25.204657 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 15 01:17:25.204721 kernel: pci 0001:01:00.1: BAR 7: assigned [mem 0x380004800000-0x380004ffffff 64bit pref] May 15 01:17:25.204784 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 15 01:17:25.204849 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 15 01:17:25.204915 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380005ffffff 64bit pref] 
May 15 01:17:25.204981 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 15 01:17:25.205044 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 15 01:17:25.205107 kernel: pci 0001:00:02.0: bridge window [mem 0x380006000000-0x3800061fffff 64bit pref] May 15 01:17:25.205170 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 15 01:17:25.205235 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 15 01:17:25.205299 kernel: pci 0001:00:03.0: bridge window [mem 0x380006200000-0x3800063fffff 64bit pref] May 15 01:17:25.205362 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 15 01:17:25.205425 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 15 01:17:25.205488 kernel: pci 0001:00:04.0: bridge window [mem 0x380006400000-0x3800065fffff 64bit pref] May 15 01:17:25.205546 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 15 01:17:25.205603 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 15 01:17:25.205679 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 15 01:17:25.205738 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380005ffffff 64bit pref] May 15 01:17:25.205806 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 15 01:17:25.205870 kernel: pci_bus 0001:02: resource 2 [mem 0x380006000000-0x3800061fffff 64bit pref] May 15 01:17:25.205936 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 15 01:17:25.205998 kernel: pci_bus 0001:03: resource 2 [mem 0x380006200000-0x3800063fffff 64bit pref] May 15 01:17:25.206063 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 15 01:17:25.206123 kernel: pci_bus 0001:04: resource 2 [mem 0x380006400000-0x3800065fffff 64bit pref] May 15 01:17:25.206133 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 15 01:17:25.206201 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:17:25.206263 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 01:17:25.206324 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 15 01:17:25.206387 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 01:17:25.206448 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 15 01:17:25.206508 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 15 01:17:25.206518 kernel: PCI host bridge to bus 0004:00 May 15 01:17:25.206582 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 15 01:17:25.206639 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 15 01:17:25.206695 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 15 01:17:25.206768 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 15 01:17:25.206839 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 15 01:17:25.206908 kernel: pci 0004:00:01.0: supports D1 D2 May 15 01:17:25.206971 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 15 01:17:25.207041 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 15 01:17:25.207105 kernel: pci 0004:00:03.0: supports D1 D2 May 15 01:17:25.207169 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 15 01:17:25.207243 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 15 01:17:25.207307 
kernel: pci 0004:00:05.0: supports D1 D2 May 15 01:17:25.207371 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 15 01:17:25.207442 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 15 01:17:25.207508 kernel: pci 0004:01:00.0: enabling Extended Tags May 15 01:17:25.207572 kernel: pci 0004:01:00.0: supports D1 D2 May 15 01:17:25.207637 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 15 01:17:25.207716 kernel: pci_bus 0004:02: extended config space not accessible May 15 01:17:25.207794 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 15 01:17:25.207866 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 15 01:17:25.207935 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 15 01:17:25.208003 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 15 01:17:25.208071 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 15 01:17:25.208137 kernel: pci 0004:02:00.0: supports D1 D2 May 15 01:17:25.208209 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 15 01:17:25.208282 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 15 01:17:25.208347 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 15 01:17:25.208413 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 15 01:17:25.208470 kernel: pci_bus 0004:00: on NUMA node 0 May 15 01:17:25.208536 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 15 01:17:25.208601 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 01:17:25.208669 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:17:25.208733 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 15 01:17:25.208799 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 01:17:25.208869 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.208933 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 01:17:25.208999 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 15 01:17:25.209062 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 15 01:17:25.209130 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 15 01:17:25.209193 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 15 01:17:25.209258 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 15 01:17:25.209324 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 15 01:17:25.209387 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.209453 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.209517 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.209582 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.209647 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.209712 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.209779 kernel: pci 0004:00:01.0: BAR 13: no 
space for [io size 0x1000] May 15 01:17:25.209842 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.209911 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.209974 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.210039 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.210103 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.210170 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 15 01:17:25.210240 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 15 01:17:25.210308 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 15 01:17:25.210379 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 15 01:17:25.210448 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 15 01:17:25.210517 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 15 01:17:25.210586 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 15 01:17:25.210651 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 15 01:17:25.210720 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 15 01:17:25.210785 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 15 01:17:25.210850 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 15 01:17:25.210917 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 15 01:17:25.210986 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 15 01:17:25.211051 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 15 01:17:25.211115 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 15 01:17:25.211181 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 15 01:17:25.211247 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 15 01:17:25.211311 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 15 01:17:25.211376 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 15 01:17:25.211437 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 15 01:17:25.211494 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 15 01:17:25.211551 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 15 01:17:25.211624 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 15 01:17:25.211684 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 15 01:17:25.211748 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] May 15 01:17:25.211815 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 15 01:17:25.211880 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 15 01:17:25.211947 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 15 01:17:25.212011 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 15 01:17:25.212021 kernel: iommu: Default domain type: Translated May 15 01:17:25.212029 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 01:17:25.212037 kernel: efivars: Registered efivars operations May 15 01:17:25.212106 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 15 01:17:25.212175 kernel: pci 0004:02:00.0: vgaarb: bridge control 
possible May 15 01:17:25.212245 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 15 01:17:25.212256 kernel: vgaarb: loaded May 15 01:17:25.212266 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 01:17:25.212274 kernel: VFS: Disk quotas dquot_6.6.0 May 15 01:17:25.212282 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 01:17:25.212289 kernel: pnp: PnP ACPI init May 15 01:17:25.212360 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 15 01:17:25.212423 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 15 01:17:25.212482 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 15 01:17:25.212542 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved May 15 01:17:25.212601 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 15 01:17:25.212658 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 15 01:17:25.212717 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved May 15 01:17:25.212775 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 15 01:17:25.212785 kernel: pnp: PnP ACPI: found 1 devices May 15 01:17:25.212793 kernel: NET: Registered PF_INET protocol family May 15 01:17:25.212801 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 01:17:25.212811 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 15 01:17:25.212819 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 01:17:25.212828 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 01:17:25.212835 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 15 01:17:25.212843 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 15 01:17:25.212851 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 15 01:17:25.212862 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 15 01:17:25.212870 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 01:17:25.212941 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 15 01:17:25.212952 kernel: kvm [1]: IPA Size Limit: 48 bits May 15 01:17:25.212960 kernel: kvm [1]: GICv3: no GICV resource entry May 15 01:17:25.212968 kernel: kvm [1]: disabling GICv2 emulation May 15 01:17:25.212976 kernel: kvm [1]: GIC system register CPU interface enabled May 15 01:17:25.212984 kernel: kvm [1]: vgic interrupt IRQ9 May 15 01:17:25.212992 kernel: kvm [1]: VHE mode initialized successfully May 15 01:17:25.213000 kernel: Initialise system trusted keyrings May 15 01:17:25.213007 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 15 01:17:25.213015 kernel: Key type asymmetric registered May 15 01:17:25.213025 kernel: Asymmetric key parser 'x509' registered May 15 01:17:25.213033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 01:17:25.213040 kernel: io scheduler mq-deadline registered May 15 01:17:25.213048 kernel: io scheduler kyber registered May 15 01:17:25.213056 kernel: io scheduler bfq registered May 15 01:17:25.213064 kernel: input: Power Button as 
/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 01:17:25.213072 kernel: ACPI: button: Power Button [PWRB] May 15 01:17:25.213080 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 15 01:17:25.213088 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 01:17:25.213162 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 15 01:17:25.213224 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.213285 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 01:17:25.213345 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 15 01:17:25.213406 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 15 01:17:25.213465 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 15 01:17:25.213537 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 15 01:17:25.213597 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.213660 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 01:17:25.213720 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 15 01:17:25.213779 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 15 01:17:25.213839 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 15 01:17:25.213911 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 15 01:17:25.213976 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.214035 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 01:17:25.214095 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 15 01:17:25.214155 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 15 01:17:25.214215 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 15 01:17:25.214282 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 15 01:17:25.214344 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.214404 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 01:17:25.214464 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 15 01:17:25.214524 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 15 01:17:25.214583 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 15 01:17:25.214660 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 15 01:17:25.214722 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.214783 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 01:17:25.214843 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 15 01:17:25.215089 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 15 01:17:25.215152 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 15 01:17:25.215223 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 15 01:17:25.215283 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.215346 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 
01:17:25.215404 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 15 01:17:25.215462 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 15 01:17:25.215520 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 15 01:17:25.215586 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 15 01:17:25.215645 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.215703 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 01:17:25.215763 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 15 01:17:25.215820 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 15 01:17:25.215883 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 15 01:17:25.215948 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 15 01:17:25.216007 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 15 01:17:25.216065 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 01:17:25.216126 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 15 01:17:25.216184 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 15 01:17:25.216242 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 15 01:17:25.216252 kernel: thunder_xcv, ver 1.0 May 15 01:17:25.216260 kernel: thunder_bgx, ver 1.0 May 15 01:17:25.216268 kernel: nicpf, ver 1.0 May 15 01:17:25.216276 kernel: nicvf, ver 1.0 May 15 01:17:25.216340 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 01:17:25.216401 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T01:17:23 UTC (1747271843) May 15 01:17:25.216411 kernel: efifb: probing for efifb May 15 01:17:25.216419 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 15 01:17:25.216427 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 15 01:17:25.216435 kernel: efifb: scrolling: redraw May 15 01:17:25.216443 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 15 01:17:25.216451 kernel: Console: switching to colour frame buffer device 100x37 May 15 01:17:25.216459 kernel: fb0: EFI VGA frame buffer device May 15 01:17:25.216468 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 15 01:17:25.216476 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 01:17:25.216484 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 01:17:25.216492 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 01:17:25.216500 kernel: watchdog: Hard watchdog permanently disabled May 15 01:17:25.216508 kernel: NET: Registered PF_INET6 protocol family May 15 01:17:25.216516 kernel: Segment Routing with IPv6 May 15 01:17:25.216524 kernel: In-situ OAM (IOAM) with IPv6 May 15 01:17:25.216531 kernel: NET: Registered PF_PACKET protocol family May 15 01:17:25.216541 kernel: Key type dns_resolver registered May 15 01:17:25.216548 kernel: registered taskstats version 1 May 15 01:17:25.216556 kernel: Loading compiled-in X.509 certificates May 15 01:17:25.216564 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4' May 15 01:17:25.216572 kernel: Key type .fscrypt registered May 15 01:17:25.216580 kernel: Key type fscrypt-provisioning registered May 15 01:17:25.216587 
kernel: ima: No TPM chip found, activating TPM-bypass! May 15 01:17:25.216595 kernel: ima: Allocated hash algorithm: sha1 May 15 01:17:25.216603 kernel: ima: No architecture policies found May 15 01:17:25.216611 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 01:17:25.216680 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 15 01:17:25.216744 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 15 01:17:25.216809 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 15 01:17:25.216875 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 15 01:17:25.216940 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 15 01:17:25.217007 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 15 01:17:25.217071 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 15 01:17:25.217134 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 15 01:17:25.217202 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 15 01:17:25.217266 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 15 01:17:25.217331 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 15 01:17:25.217394 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 15 01:17:25.217460 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 15 01:17:25.217524 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 15 01:17:25.217589 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 15 01:17:25.217652 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 15 01:17:25.217720 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 15 01:17:25.217782 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 15 01:17:25.217847 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 15 01:17:25.217914 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 15 01:17:25.217977 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 15 01:17:25.218041 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 15 01:17:25.218104 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 May 15 01:17:25.218167 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 15 01:17:25.218235 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 15 01:17:25.218299 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 15 01:17:25.218362 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 15 01:17:25.218426 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 15 01:17:25.218490 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 15 01:17:25.218553 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 15 01:17:25.218619 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 15 01:17:25.218681 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 15 01:17:25.218747 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 15 01:17:25.218813 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 15 01:17:25.218882 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 15 01:17:25.218946 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 15 01:17:25.219011 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 15 01:17:25.219074 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 15 01:17:25.219139 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 15 01:17:25.219202 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 15 01:17:25.219266 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 15 01:17:25.219333 kernel: pcieport 0002:00:03.0: 
AER: enabled with IRQ 96 May 15 01:17:25.219397 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 15 01:17:25.219461 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 15 01:17:25.219525 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 15 01:17:25.219588 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 15 01:17:25.219653 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 15 01:17:25.219716 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 May 15 01:17:25.219781 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 15 01:17:25.219847 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 15 01:17:25.219915 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 15 01:17:25.219979 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 15 01:17:25.220043 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 15 01:17:25.220107 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 15 01:17:25.220170 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 15 01:17:25.220234 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 15 01:17:25.220300 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 15 01:17:25.220363 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 15 01:17:25.220426 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 15 01:17:25.220490 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 15 01:17:25.220556 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 15 01:17:25.220567 kernel: clk: Disabling unused clocks May 15 01:17:25.220575 kernel: Freeing unused kernel memory: 38336K May 15 01:17:25.220583 kernel: Run /init as init process May 15 01:17:25.220591 kernel: with arguments: May 15 01:17:25.220601 kernel: /init May 15 01:17:25.220610 kernel: with environment: May 15 01:17:25.220618 kernel: HOME=/ May 15 01:17:25.220625 kernel: TERM=linux May 15 01:17:25.220633 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 01:17:25.220642 systemd[1]: Successfully made /usr/ read-only. May 15 01:17:25.220653 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 01:17:25.220661 systemd[1]: Detected architecture arm64. May 15 01:17:25.220671 systemd[1]: Running in initrd. May 15 01:17:25.220679 systemd[1]: No hostname configured, using default hostname. May 15 01:17:25.220687 systemd[1]: Hostname set to . May 15 01:17:25.220696 systemd[1]: Initializing machine ID from random generator. May 15 01:17:25.220704 systemd[1]: Queued start job for default target initrd.target. May 15 01:17:25.220712 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 01:17:25.220720 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 01:17:25.220729 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 01:17:25.220739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 01:17:25.220748 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
May 15 01:17:25.220756 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 01:17:25.220766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 01:17:25.220774 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 01:17:25.220783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 01:17:25.220791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 01:17:25.220800 systemd[1]: Reached target paths.target - Path Units. May 15 01:17:25.220808 systemd[1]: Reached target slices.target - Slice Units. May 15 01:17:25.220817 systemd[1]: Reached target swap.target - Swaps. May 15 01:17:25.220825 systemd[1]: Reached target timers.target - Timer Units. May 15 01:17:25.220833 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 01:17:25.220841 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 01:17:25.220849 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 01:17:25.220860 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 01:17:25.220870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 01:17:25.220879 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 01:17:25.220887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 01:17:25.220895 systemd[1]: Reached target sockets.target - Socket Units. May 15 01:17:25.220903 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 01:17:25.220911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 01:17:25.220919 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 01:17:25.220927 systemd[1]: Starting systemd-fsck-usr.service... May 15 01:17:25.220936 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 01:17:25.220946 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 01:17:25.220975 systemd-journald[902]: Collecting audit messages is disabled. May 15 01:17:25.220995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:17:25.221004 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 01:17:25.221014 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 01:17:25.221022 kernel: Bridge firewalling registered May 15 01:17:25.221031 systemd-journald[902]: Journal started May 15 01:17:25.221049 systemd-journald[902]: Runtime Journal (/run/log/journal/08f71c034315437aa15b0a7351b96d78) is 8M, max 4G, 3.9G free. May 15 01:17:25.178985 systemd-modules-load[906]: Inserted module 'overlay' May 15 01:17:25.254522 systemd[1]: Started systemd-journald.service - Journal Service. May 15 01:17:25.202386 systemd-modules-load[906]: Inserted module 'br_netfilter' May 15 01:17:25.260074 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 01:17:25.270705 systemd[1]: Finished systemd-fsck-usr.service. May 15 01:17:25.281328 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 15 01:17:25.291891 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:17:25.312993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 01:17:25.318920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 01:17:25.336010 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 01:17:25.357860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 01:17:25.374692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:17:25.391232 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 01:17:25.396929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 01:17:25.408181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 01:17:25.437950 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 01:17:25.450928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 01:17:25.459278 dracut-cmdline[946]: dracut-dracut-053 May 15 01:17:25.470131 dracut-cmdline[946]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 15 01:17:25.464410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 01:17:25.477443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 01:17:25.489919 systemd-resolved[951]: Positive Trust Anchors: May 15 01:17:25.489928 systemd-resolved[951]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 01:17:25.489959 systemd-resolved[951]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 01:17:25.504876 systemd-resolved[951]: Defaulting to hostname 'linux'. May 15 01:17:25.514750 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 01:17:25.533954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 01:17:25.639366 kernel: SCSI subsystem initialized May 15 01:17:25.650859 kernel: Loading iSCSI transport class v2.0-870. May 15 01:17:25.669863 kernel: iscsi: registered transport (tcp) May 15 01:17:25.696646 kernel: iscsi: registered transport (qla4xxx) May 15 01:17:25.696670 kernel: QLogic iSCSI HBA Driver May 15 01:17:25.739202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 01:17:25.763009 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 15 01:17:25.807206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 01:17:25.807235 kernel: device-mapper: uevent: version 1.0.3 May 15 01:17:25.825861 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 01:17:25.881865 kernel: raid6: neonx8 gen() 15843 MB/s May 15 01:17:25.906864 kernel: raid6: neonx4 gen() 15883 MB/s May 15 01:17:25.931865 kernel: raid6: neonx2 gen() 13253 MB/s May 15 01:17:25.956863 kernel: raid6: neonx1 gen() 10583 MB/s May 15 01:17:25.981864 kernel: raid6: int64x8 gen() 6805 MB/s May 15 01:17:26.006864 kernel: raid6: int64x4 gen() 7387 MB/s May 15 01:17:26.031860 kernel: raid6: int64x2 gen() 6134 MB/s May 15 01:17:26.059658 kernel: raid6: int64x1 gen() 5077 MB/s May 15 01:17:26.059680 kernel: raid6: using algorithm neonx4 gen() 15883 MB/s May 15 01:17:26.094088 kernel: raid6: .... xor() 12399 MB/s, rmw enabled May 15 01:17:26.094109 kernel: raid6: using neon recovery algorithm May 15 01:17:26.116846 kernel: xor: measuring software checksum speed May 15 01:17:26.116881 kernel: 8regs : 21624 MB/sec May 15 01:17:26.124683 kernel: 32regs : 21704 MB/sec May 15 01:17:26.132345 kernel: arm64_neon : 28099 MB/sec May 15 01:17:26.139887 kernel: xor: using function: arm64_neon (28099 MB/sec) May 15 01:17:26.199869 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 01:17:26.209459 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 01:17:26.230999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 01:17:26.244537 systemd-udevd[1139]: Using default interface naming scheme 'v255'. May 15 01:17:26.248055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 01:17:26.264953 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 01:17:26.278774 dracut-pre-trigger[1149]: rd.md=0: removing MD RAID activation May 15 01:17:26.304915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 01:17:26.322949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 01:17:26.423327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 01:17:26.452354 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 01:17:26.452375 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 01:17:26.474763 kernel: ACPI: bus type USB registered May 15 01:17:26.474787 kernel: usbcore: registered new interface driver usbfs May 15 01:17:26.484425 kernel: usbcore: registered new interface driver hub May 15 01:17:26.484447 kernel: PTP clock support registered May 15 01:17:26.484465 kernel: usbcore: registered new device driver usb May 15 01:17:26.509981 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 01:17:26.549985 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 15 01:17:26.549997 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 15 01:17:26.550007 kernel: igb 0003:03:00.0: Adding to iommu group 31 May 15 01:17:26.520534 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 15 01:17:26.649501 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32 May 15 01:17:26.649611 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 15 01:17:26.649697 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 15 01:17:26.649777 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 15 01:17:26.649859 kernel: nvme 0005:03:00.0: Adding to iommu group 33 May 15 01:17:26.649948 kernel: nvme 0005:04:00.0: Adding to iommu group 34 May 15 01:17:26.650037 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 May 15 01:17:26.650124 kernel: igb 0003:03:00.0: added PHC on eth0 May 15 01:17:26.520595 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:17:26.644243 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 01:17:26.654944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 01:17:26.654994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:17:26.671296 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:17:26.764646 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 15 01:17:26.764785 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0d:9c:f2 May 15 01:17:26.764873 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 15 01:17:26.764953 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 15 01:17:26.765032 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 15 01:17:26.701953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:17:26.770400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 01:17:26.782878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:17:26.792815 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 01:17:26.803117 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 01:17:26.820152 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 01:17:26.841959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 01:17:26.848751 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 01:17:26.874546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 01:17:27.061421 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 15 01:17:27.061594 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 15 01:17:27.061691 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 15 01:17:27.061805 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 15 01:17:27.061906 kernel: nvme nvme0: pci function 0005:03:00.0 May 15 01:17:27.084800 kernel: hub 1-0:1.0: USB hub found May 15 01:17:27.084903 kernel: hub 1-0:1.0: 4 ports detected May 15 01:17:27.084981 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 15 01:17:27.085070 kernel: hub 2-0:1.0: USB hub found May 15 01:17:27.085157 kernel: hub 2-0:1.0: 4 ports detected May 15 01:17:27.085233 kernel: nvme nvme1: pci function 0005:04:00.0 May 15 01:17:27.085318 kernel: mlx5_core 0001:01:00.0: firmware version: 14.30.1004 May 15 01:17:27.085410 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 15 01:17:27.085489 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 15 01:17:27.085561 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 15 01:17:27.084849 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:17:27.142869 kernel: nvme nvme1: 32/0/0 default/read/poll queues May 15 01:17:27.143054 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 15 01:17:27.143149 kernel: igb 0003:03:00.1: added PHC on eth1 May 15 01:17:27.149167 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection May 15 01:17:27.160649 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0d:9c:f3 May 15 01:17:27.172317 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 May 15 01:17:27.181902 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 15 01:17:27.212442 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 01:17:27.212476 kernel: GPT:9289727 != 1875385007 May 15 01:17:27.212486 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd May 15 01:17:27.212517 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 01:17:27.237245 kernel: GPT:9289727 != 1875385007 May 15 01:17:27.245103 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 01:17:27.254375 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 15 01:17:27.269862 kernel: igb 0003:03:00.0 eno1: renamed from eth0 May 15 01:17:27.284862 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/nvme0n1p3 scanned by (udev-worker) (1205) May 15 01:17:27.284878 kernel: igb 0003:03:00.1 eno2: renamed from eth1 May 15 01:17:27.284983 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1210) May 15 01:17:27.295823 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. May 15 01:17:27.350620 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged May 15 01:17:27.347864 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. May 15 01:17:27.382304 kernel: hub 1-3:1.0: USB hub found May 15 01:17:27.382444 kernel: hub 1-3:1.0: 4 ports detected May 15 01:17:27.378684 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 15 01:17:27.387408 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 15 01:17:27.410600 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 15 01:17:27.427944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 01:17:27.455584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 15 01:17:27.455689 disk-uuid[1312]: Primary Header is updated. May 15 01:17:27.455689 disk-uuid[1312]: Secondary Entries is updated. May 15 01:17:27.455689 disk-uuid[1312]: Secondary Header is updated. 
May 15 01:17:27.504327 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd May 15 01:17:27.504582 kernel: hub 2-3:1.0: USB hub found May 15 01:17:27.504700 kernel: hub 2-3:1.0: 4 ports detected May 15 01:17:27.612869 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 15 01:17:27.625860 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 May 15 01:17:27.649554 kernel: mlx5_core 0001:01:00.1: firmware version: 14.30.1004 May 15 01:17:27.649637 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 15 01:17:27.933608 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged May 15 01:17:28.215871 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 15 01:17:28.231860 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 May 15 01:17:28.250861 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 May 15 01:17:28.455872 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 15 01:17:28.456081 disk-uuid[1313]: The operation has completed successfully. May 15 01:17:28.485252 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 01:17:28.485354 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 01:17:28.536026 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 01:17:28.546604 sh[1482]: Success May 15 01:17:28.567861 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 01:17:28.601182 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 01:17:28.627044 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 01:17:28.637260 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 01:17:28.643860 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799 May 15 01:17:28.643877 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 01:17:28.643887 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 01:17:28.643897 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 01:17:28.643906 kernel: BTRFS info (device dm-0): using free space tree May 15 01:17:28.646858 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 15 01:17:28.729528 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 01:17:28.736075 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 01:17:28.744029 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 01:17:28.757461 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 01:17:28.868371 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 01:17:28.868390 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 15 01:17:28.868400 kernel: BTRFS info (device nvme0n1p6): using free space tree May 15 01:17:28.868410 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 15 01:17:28.868420 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 15 01:17:28.868429 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 01:17:28.858703 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 01:17:28.874134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 01:17:28.900972 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 01:17:28.907893 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 01:17:28.936291 systemd-networkd[1672]: lo: Link UP May 15 01:17:28.936296 systemd-networkd[1672]: lo: Gained carrier May 15 01:17:28.940346 systemd-networkd[1672]: Enumeration completed May 15 01:17:28.940659 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 01:17:28.941562 systemd-networkd[1672]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 01:17:28.947093 systemd[1]: Reached target network.target - Network. May 15 01:17:28.988549 ignition[1668]: Ignition 2.20.0 May 15 01:17:28.988556 ignition[1668]: Stage: fetch-offline May 15 01:17:28.993235 systemd-networkd[1672]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 01:17:28.988599 ignition[1668]: no configs at "/usr/lib/ignition/base.d" May 15 01:17:28.999295 unknown[1668]: fetched base config from "system" May 15 01:17:28.988608 ignition[1668]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 01:17:28.999302 unknown[1668]: fetched user config from "system" May 15 01:17:28.988768 ignition[1668]: parsed url from cmdline: "" May 15 01:17:29.001796 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 01:17:28.988771 ignition[1668]: no config URL provided May 15 01:17:29.014539 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 01:17:28.988775 ignition[1668]: reading system config file "/usr/lib/ignition/user.ign" May 15 01:17:29.024031 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 01:17:28.988824 ignition[1668]: parsing config with SHA512: f9d063c24dab510aad961f75ebbab06a35acb5b6d57c2e5f5a2fe29303ab995b8be12dedfbc371ba1c3d892e7858541a31ab014352ff487516c46eb29ec5a92b May 15 01:17:29.044727 systemd-networkd[1672]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 15 01:17:28.999760 ignition[1668]: fetch-offline: fetch-offline passed May 15 01:17:28.999764 ignition[1668]: POST message to Packet Timeline May 15 01:17:28.999769 ignition[1668]: POST Status error: resource requires networking May 15 01:17:28.999835 ignition[1668]: Ignition finished successfully May 15 01:17:29.040070 ignition[1704]: Ignition 2.20.0 May 15 01:17:29.040077 ignition[1704]: Stage: kargs May 15 01:17:29.040331 ignition[1704]: no configs at "/usr/lib/ignition/base.d" May 15 01:17:29.040340 ignition[1704]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 01:17:29.041930 ignition[1704]: kargs: kargs passed May 15 01:17:29.041946 ignition[1704]: POST message to Packet Timeline May 15 01:17:29.042156 ignition[1704]: GET https://metadata.packet.net/metadata: attempt #1 May 15 01:17:29.045213 ignition[1704]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40227->[::1]:53: read: connection refused May 15 01:17:29.245797 ignition[1704]: GET https://metadata.packet.net/metadata: attempt #2 May 15 01:17:29.246228 ignition[1704]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53624->[::1]:53: read: connection refused May 15 01:17:29.599868 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 15 01:17:29.603292 systemd-networkd[1672]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 01:17:29.646983 ignition[1704]: GET https://metadata.packet.net/metadata: attempt #3 May 15 01:17:29.647372 ignition[1704]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40047->[::1]:53: read: connection refused May 15 01:17:30.167866 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 15 01:17:30.170594 systemd-networkd[1672]: eno1: Link UP May 15 01:17:30.170729 systemd-networkd[1672]: eno2: Link UP May 15 01:17:30.170848 systemd-networkd[1672]: enP1p1s0f0np0: Link UP May 15 01:17:30.171075 systemd-networkd[1672]: enP1p1s0f0np0: Gained carrier May 15 01:17:30.183071 systemd-networkd[1672]: enP1p1s0f1np1: Link UP May 15 01:17:30.226898 systemd-networkd[1672]: enP1p1s0f0np0: DHCPv4 address 147.28.151.170/30, gateway 147.28.151.169 acquired from 147.28.144.140 May 15 01:17:30.447727 ignition[1704]: GET https://metadata.packet.net/metadata: attempt #4 May 15 01:17:30.448241 ignition[1704]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50605->[::1]:53: read: connection refused May 15 01:17:30.605058 systemd-networkd[1672]: enP1p1s0f1np1: Gained carrier May 15 01:17:31.205089 systemd-networkd[1672]: enP1p1s0f0np0: Gained IPv6LL May 15 01:17:32.049179 ignition[1704]: GET https://metadata.packet.net/metadata: attempt #5 May 15 01:17:32.049843 ignition[1704]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46626->[::1]:53: read: connection refused May 15 01:17:32.485102 systemd-networkd[1672]: enP1p1s0f1np1: Gained IPv6LL May 15 01:17:35.253279 ignition[1704]: GET https://metadata.packet.net/metadata: attempt #6 May 15 01:17:36.231124 ignition[1704]: GET result: OK May 15 01:17:36.535696 ignition[1704]: Ignition finished successfully May 15 01:17:36.538157 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
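The kargs stage above keeps retrying the Packet metadata endpoint while local DNS still fails, and only succeeds once the bond uplink has carrier and a DHCP lease. Below is a minimal Python sketch of that retry pattern; the endpoint URL is taken from the log, but the attempt count, timeout, and backoff are illustrative assumptions, not Ignition's actual implementation.

import time
import urllib.error
import urllib.request

# Endpoint copied from the log above; retry/backoff policy is an assumption
# for illustration, not how Ignition itself schedules attempts.
METADATA_URL = "https://metadata.packet.net/metadata"

def fetch_metadata(max_attempts: int = 6, delay: float = 2.0) -> bytes:
    """Retry the metadata fetch until the network (link, DHCP, DNS) is ready."""
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            print(f"GET {METADATA_URL}: attempt #{attempt} failed: {exc}")
            time.sleep(delay)
            delay *= 2  # back off while the interface comes up
    raise RuntimeError("metadata endpoint unreachable")

if __name__ == "__main__":
    print(fetch_metadata()[:200])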
May 15 01:17:36.552014 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 01:17:36.567036 ignition[1725]: Ignition 2.20.0 May 15 01:17:36.567043 ignition[1725]: Stage: disks May 15 01:17:36.567194 ignition[1725]: no configs at "/usr/lib/ignition/base.d" May 15 01:17:36.567203 ignition[1725]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 01:17:36.568019 ignition[1725]: disks: disks passed May 15 01:17:36.568024 ignition[1725]: POST message to Packet Timeline May 15 01:17:36.568040 ignition[1725]: GET https://metadata.packet.net/metadata: attempt #1 May 15 01:17:37.114044 ignition[1725]: GET result: OK May 15 01:17:38.194148 ignition[1725]: Ignition finished successfully May 15 01:17:38.197113 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 01:17:38.202651 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 01:17:38.209852 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 01:17:38.218038 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 01:17:38.226289 systemd[1]: Reached target sysinit.target - System Initialization. May 15 01:17:38.234958 systemd[1]: Reached target basic.target - Basic System. May 15 01:17:38.254999 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 01:17:38.270756 systemd-fsck[1744]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 01:17:38.273978 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 01:17:38.291952 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 01:17:38.360860 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none. May 15 01:17:38.361189 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 01:17:38.371271 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 01:17:38.392929 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 01:17:38.485419 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1756) May 15 01:17:38.485449 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 01:17:38.485475 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 15 01:17:38.485495 kernel: BTRFS info (device nvme0n1p6): using free space tree May 15 01:17:38.485514 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 15 01:17:38.485533 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 15 01:17:38.399116 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 01:17:38.491708 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 15 01:17:38.502462 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 15 01:17:38.518353 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 01:17:38.518402 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
May 15 01:17:38.550655 coreos-metadata[1775]: May 15 01:17:38.548 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 01:17:38.567253 coreos-metadata[1774]: May 15 01:17:38.548 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 01:17:38.531365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 01:17:38.545289 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 01:17:38.568956 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 01:17:38.602000 initrd-setup-root[1797]: cut: /sysroot/etc/passwd: No such file or directory May 15 01:17:38.607924 initrd-setup-root[1805]: cut: /sysroot/etc/group: No such file or directory May 15 01:17:38.614099 initrd-setup-root[1813]: cut: /sysroot/etc/shadow: No such file or directory May 15 01:17:38.620324 initrd-setup-root[1821]: cut: /sysroot/etc/gshadow: No such file or directory May 15 01:17:38.690035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 01:17:38.711933 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 01:17:38.719861 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 01:17:38.743120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 01:17:38.749688 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 01:17:38.769111 ignition[1891]: INFO : Ignition 2.20.0 May 15 01:17:38.769111 ignition[1891]: INFO : Stage: mount May 15 01:17:38.785549 ignition[1891]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 01:17:38.785549 ignition[1891]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 01:17:38.785549 ignition[1891]: INFO : mount: mount passed May 15 01:17:38.785549 ignition[1891]: INFO : POST message to Packet Timeline May 15 01:17:38.785549 ignition[1891]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 15 01:17:38.773086 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 01:17:39.161455 coreos-metadata[1774]: May 15 01:17:39.161 INFO Fetch successful May 15 01:17:39.166237 coreos-metadata[1775]: May 15 01:17:39.163 INFO Fetch successful May 15 01:17:39.206932 coreos-metadata[1774]: May 15 01:17:39.206 INFO wrote hostname ci-4230.1.1-n-c9ea1a9895 to /sysroot/etc/hostname May 15 01:17:39.210197 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 01:17:39.221140 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 15 01:17:39.221229 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 15 01:17:39.290911 ignition[1891]: INFO : GET result: OK May 15 01:17:39.559916 ignition[1891]: INFO : Ignition finished successfully May 15 01:17:39.562205 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 01:17:39.579924 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 01:17:39.592056 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 01:17:39.627527 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1917) May 15 01:17:39.627562 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 01:17:39.641730 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 15 01:17:39.654569 kernel: BTRFS info (device nvme0n1p6): using free space tree May 15 01:17:39.677209 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 15 01:17:39.677230 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 15 01:17:39.685323 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 01:17:39.716645 ignition[1935]: INFO : Ignition 2.20.0 May 15 01:17:39.716645 ignition[1935]: INFO : Stage: files May 15 01:17:39.726065 ignition[1935]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 01:17:39.726065 ignition[1935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 01:17:39.726065 ignition[1935]: DEBUG : files: compiled without relabeling support, skipping May 15 01:17:39.726065 ignition[1935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 01:17:39.726065 ignition[1935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 01:17:39.726065 ignition[1935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 01:17:39.726065 ignition[1935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 01:17:39.726065 ignition[1935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 01:17:39.726065 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 01:17:39.726065 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 15 01:17:39.722431 unknown[1935]: wrote ssh authorized keys file for user: core May 15 01:17:39.817586 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 01:17:41.355554 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 01:17:41.366114 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 15 01:17:41.546219 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 01:17:41.955558 ignition[1935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 01:17:41.967835 ignition[1935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 01:17:41.967835 ignition[1935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 01:17:41.967835 ignition[1935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 01:17:41.967835 ignition[1935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 01:17:41.967835 ignition[1935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 15 01:17:41.967835 ignition[1935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 15 01:17:41.967835 ignition[1935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 01:17:41.967835 ignition[1935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 01:17:41.967835 ignition[1935]: INFO : files: files passed May 15 01:17:41.967835 ignition[1935]: INFO : POST message to Packet Timeline May 15 01:17:41.967835 ignition[1935]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 15 01:17:42.515795 ignition[1935]: INFO : GET result: OK May 15 01:17:42.916539 ignition[1935]: INFO : Ignition finished successfully May 15 01:17:42.920047 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 01:17:42.935025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 01:17:42.947550 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 01:17:42.966341 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 01:17:42.966529 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
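The files stage above writes files, a symlink, and a systemd unit preset that were declared in the node's Ignition config. That config itself is not part of the log, so the fragment below is only a hedged sketch of what such a declaration could look like, expressed as a Python dict: the URLs, paths, and unit name are copied from the log lines, while the schema version and overall shape are assumptions based on Ignition v3-style configs.

import json

# Hypothetical fragment resembling what the files stage above consumed.
# The real config for this node is not shown in the log.
ignition_fragment = {
    "ignition": {"version": "3.4.0"},  # assumed version field
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
            }
        ],
    },
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}

print(json.dumps(ignition_fragment, indent=2))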
May 15 01:17:42.984799 initrd-setup-root-after-ignition[1981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 01:17:42.984799 initrd-setup-root-after-ignition[1981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 01:17:42.979223 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 01:17:43.031249 initrd-setup-root-after-ignition[1985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 01:17:42.992330 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 01:17:43.021048 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 01:17:43.063618 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 01:17:43.063823 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 01:17:43.073583 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 01:17:43.084048 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 01:17:43.101006 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 01:17:43.110047 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 01:17:43.134007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 01:17:43.153979 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 01:17:43.176939 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 01:17:43.188634 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 01:17:43.194423 systemd[1]: Stopped target timers.target - Timer Units. May 15 01:17:43.205881 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 01:17:43.205985 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 01:17:43.217422 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 01:17:43.228530 systemd[1]: Stopped target basic.target - Basic System. May 15 01:17:43.239830 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 01:17:43.251057 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 01:17:43.262166 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 01:17:43.273421 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 01:17:43.284540 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 01:17:43.295744 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 01:17:43.306890 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 01:17:43.323417 systemd[1]: Stopped target swap.target - Swaps. May 15 01:17:43.334716 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 01:17:43.334826 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 01:17:43.346231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 01:17:43.357223 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 01:17:43.368146 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 15 01:17:43.371880 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 01:17:43.379304 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 01:17:43.379426 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 01:17:43.390583 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 01:17:43.390691 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 01:17:43.401810 systemd[1]: Stopped target paths.target - Path Units. May 15 01:17:43.412882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 01:17:43.416888 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 01:17:43.430014 systemd[1]: Stopped target slices.target - Slice Units. May 15 01:17:43.441382 systemd[1]: Stopped target sockets.target - Socket Units. May 15 01:17:43.452829 systemd[1]: iscsid.socket: Deactivated successfully. May 15 01:17:43.452925 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 01:17:43.551671 ignition[2007]: INFO : Ignition 2.20.0 May 15 01:17:43.551671 ignition[2007]: INFO : Stage: umount May 15 01:17:43.551671 ignition[2007]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 01:17:43.551671 ignition[2007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 01:17:43.551671 ignition[2007]: INFO : umount: umount passed May 15 01:17:43.551671 ignition[2007]: INFO : POST message to Packet Timeline May 15 01:17:43.551671 ignition[2007]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 15 01:17:43.464267 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 01:17:43.464380 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 01:17:43.475815 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 01:17:43.475923 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 01:17:43.487300 systemd[1]: ignition-files.service: Deactivated successfully. May 15 01:17:43.487382 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 01:17:43.498950 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 15 01:17:43.499059 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 01:17:43.526036 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 01:17:43.534301 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 01:17:43.545898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 01:17:43.546006 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 01:17:43.557803 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 01:17:43.557897 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 01:17:43.571447 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 01:17:43.572270 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 01:17:43.572358 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 01:17:43.582295 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 01:17:43.582372 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 15 01:17:44.620535 ignition[2007]: INFO : GET result: OK May 15 01:17:44.913917 ignition[2007]: INFO : Ignition finished successfully May 15 01:17:44.917013 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 01:17:44.917221 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 01:17:44.923934 systemd[1]: Stopped target network.target - Network. May 15 01:17:44.932833 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 01:17:44.932905 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 01:17:44.942284 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 01:17:44.942329 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 01:17:44.951927 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 01:17:44.951955 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 01:17:44.961287 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 01:17:44.961328 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 01:17:44.970828 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 01:17:44.970858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 01:17:44.980875 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 01:17:44.990331 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 01:17:45.000429 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 01:17:45.000556 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 01:17:45.013991 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 01:17:45.014852 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 01:17:45.015016 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 01:17:45.027119 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 01:17:45.027382 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 01:17:45.027489 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 01:17:45.035119 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 01:17:45.036300 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 01:17:45.036481 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 01:17:45.052949 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 01:17:45.059539 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 01:17:45.059592 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 01:17:45.069669 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 01:17:45.069704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 01:17:45.079842 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 01:17:45.079901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 01:17:45.095093 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 01:17:45.106824 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 01:17:45.107227 systemd[1]: systemd-udevd.service: Deactivated successfully. 
May 15 01:17:45.107374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 01:17:45.133534 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 01:17:45.133734 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 01:17:45.143440 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 01:17:45.143486 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 01:17:45.154216 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 01:17:45.154255 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 01:17:45.175875 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 01:17:45.175915 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 01:17:45.186975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 01:17:45.187021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 01:17:45.204994 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 01:17:45.220910 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 01:17:45.220977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 01:17:45.232266 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 01:17:45.232305 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 01:17:45.243402 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 01:17:45.243436 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 01:17:45.255271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 01:17:45.255304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:17:45.268431 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 01:17:45.268493 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 01:17:45.268781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 01:17:45.268885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 01:17:45.746739 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 01:17:45.746921 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 01:17:45.758239 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 01:17:45.781007 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 01:17:45.794611 systemd[1]: Switching root. May 15 01:17:45.850246 systemd-journald[902]: Journal stopped May 15 01:17:47.903073 systemd-journald[902]: Received SIGTERM from PID 1 (systemd). 
May 15 01:17:47.903100 kernel: SELinux: policy capability network_peer_controls=1 May 15 01:17:47.903110 kernel: SELinux: policy capability open_perms=1 May 15 01:17:47.903118 kernel: SELinux: policy capability extended_socket_class=1 May 15 01:17:47.903126 kernel: SELinux: policy capability always_check_network=0 May 15 01:17:47.903133 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 01:17:47.903142 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 01:17:47.903151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 01:17:47.903159 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 01:17:47.903167 kernel: audit: type=1403 audit(1747271866.019:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 01:17:47.903176 systemd[1]: Successfully loaded SELinux policy in 115.464ms. May 15 01:17:47.903185 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.799ms. May 15 01:17:47.903195 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 01:17:47.903205 systemd[1]: Detected architecture arm64. May 15 01:17:47.903215 systemd[1]: Detected first boot. May 15 01:17:47.903224 systemd[1]: Hostname set to <ci-4230.1.1-n-c9ea1a9895>. May 15 01:17:47.903233 systemd[1]: Initializing machine ID from random generator. May 15 01:17:47.903242 zram_generator::config[2087]: No configuration found. May 15 01:17:47.903253 systemd[1]: Populated /etc with preset unit settings. May 15 01:17:47.903263 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 01:17:47.903272 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 01:17:47.903281 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 01:17:47.903290 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 01:17:47.903299 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 01:17:47.903308 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 01:17:47.903318 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 01:17:47.903328 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 01:17:47.903337 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 01:17:47.903346 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 01:17:47.903356 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 01:17:47.903365 systemd[1]: Created slice user.slice - User and Session Slice. May 15 01:17:47.903374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 01:17:47.903383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 01:17:47.903393 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 01:17:47.903403 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 01:17:47.903412 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 01:17:47.903423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 01:17:47.903433 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 01:17:47.903442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 01:17:47.903451 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 01:17:47.903462 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 01:17:47.903471 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 01:17:47.903481 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 01:17:47.903491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 01:17:47.903500 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 01:17:47.903509 systemd[1]: Reached target slices.target - Slice Units. May 15 01:17:47.903518 systemd[1]: Reached target swap.target - Swaps. May 15 01:17:47.903527 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 01:17:47.903537 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 01:17:47.903547 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 01:17:47.903557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 01:17:47.903567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 01:17:47.903576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 01:17:47.903585 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 01:17:47.903596 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 01:17:47.903605 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 01:17:47.903615 systemd[1]: Mounting media.mount - External Media Directory... May 15 01:17:47.903624 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 01:17:47.903633 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 01:17:47.903643 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 01:17:47.903652 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 01:17:47.903662 systemd[1]: Reached target machines.target - Containers. May 15 01:17:47.903672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 01:17:47.903682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 01:17:47.903691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 01:17:47.903700 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 01:17:47.903710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 01:17:47.903719 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 01:17:47.903728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 15 01:17:47.903737 kernel: ACPI: bus type drm_connector registered May 15 01:17:47.903746 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 01:17:47.903756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 01:17:47.903765 kernel: fuse: init (API version 7.39) May 15 01:17:47.903774 kernel: loop: module loaded May 15 01:17:47.903782 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 01:17:47.903792 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 01:17:47.903801 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 01:17:47.903810 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 01:17:47.903821 systemd[1]: Stopped systemd-fsck-usr.service. May 15 01:17:47.903832 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 01:17:47.903842 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 01:17:47.903851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 01:17:47.903946 systemd-journald[2194]: Collecting audit messages is disabled. May 15 01:17:47.903970 systemd-journald[2194]: Journal started May 15 01:17:47.903989 systemd-journald[2194]: Runtime Journal (/run/log/journal/63dc748d39424b88b89aa4dc1d88343e) is 8M, max 4G, 3.9G free. May 15 01:17:46.580129 systemd[1]: Queued start job for default target multi-user.target. May 15 01:17:46.592246 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 15 01:17:46.592562 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 01:17:46.592877 systemd[1]: systemd-journald.service: Consumed 3.448s CPU time. May 15 01:17:47.930869 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 01:17:47.957867 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 01:17:47.985869 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 01:17:48.006865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 01:17:48.029591 systemd[1]: verity-setup.service: Deactivated successfully. May 15 01:17:48.029626 systemd[1]: Stopped verity-setup.service. May 15 01:17:48.054876 systemd[1]: Started systemd-journald.service - Journal Service. May 15 01:17:48.060733 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 01:17:48.066158 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 01:17:48.071544 systemd[1]: Mounted media.mount - External Media Directory. May 15 01:17:48.076893 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 01:17:48.082291 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 01:17:48.087571 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 01:17:48.094889 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 01:17:48.100354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 01:17:48.106961 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
May 15 01:17:48.107122 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 01:17:48.112424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 01:17:48.112581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 01:17:48.117918 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 01:17:48.118075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 01:17:48.123388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:17:48.124889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 01:17:48.130211 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 01:17:48.130377 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 01:17:48.135604 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:17:48.135753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 01:17:48.140937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 01:17:48.145985 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 01:17:48.151023 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 01:17:48.156127 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 01:17:48.161329 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 01:17:48.176517 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 01:17:48.198022 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 01:17:48.203904 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 01:17:48.208642 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 01:17:48.208673 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 01:17:48.214199 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 01:17:48.219721 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 01:17:48.225459 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 01:17:48.230181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 01:17:48.231648 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 01:17:48.237176 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 01:17:48.242944 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:17:48.244014 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 01:17:48.248202 systemd-journald[2194]: Time spent on flushing to /var/log/journal/63dc748d39424b88b89aa4dc1d88343e is 24.820ms for 2362 entries. May 15 01:17:48.248202 systemd-journald[2194]: System Journal (/var/log/journal/63dc748d39424b88b89aa4dc1d88343e) is 8M, max 195.6M, 187.6M free. May 15 01:17:48.286494 systemd-journald[2194]: Received client request to flush runtime journal. 
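systemd-journald reports spending 24.820 ms flushing 2362 entries to /var/log/journal. A quick back-of-the-envelope check of the per-entry cost, using only the figures quoted in the log:

# Figures taken from the systemd-journald line above.
entries = 2362
flush_ms = 24.820

per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.1f} microseconds per journal entry flushed")  # roughly 10.5 µs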
May 15 01:17:48.286599 kernel: loop0: detected capacity change from 0 to 113512 May 15 01:17:48.260675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 01:17:48.261880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 01:17:48.267472 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 01:17:48.273296 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 01:17:48.279048 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 01:17:48.296145 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 01:17:48.298289 systemd-tmpfiles[2243]: ACLs are not supported, ignoring. May 15 01:17:48.298302 systemd-tmpfiles[2243]: ACLs are not supported, ignoring. May 15 01:17:48.305896 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 01:17:48.309477 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 01:17:48.315680 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 01:17:48.320351 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 01:17:48.325008 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 01:17:48.329885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 01:17:48.334685 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 01:17:48.345126 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 01:17:48.347862 kernel: loop1: detected capacity change from 0 to 201592 May 15 01:17:48.371203 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 01:17:48.377293 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 01:17:48.382915 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 01:17:48.384774 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 01:17:48.391285 udevadm[2245]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 01:17:48.400485 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 01:17:48.417862 kernel: loop2: detected capacity change from 0 to 8 May 15 01:17:48.424033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 01:17:48.436547 systemd-tmpfiles[2280]: ACLs are not supported, ignoring. May 15 01:17:48.436560 systemd-tmpfiles[2280]: ACLs are not supported, ignoring. May 15 01:17:48.440204 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 01:17:48.476865 kernel: loop3: detected capacity change from 0 to 123192 May 15 01:17:48.491850 ldconfig[2229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 01:17:48.493515 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 15 01:17:48.527871 kernel: loop4: detected capacity change from 0 to 113512 May 15 01:17:48.543864 kernel: loop5: detected capacity change from 0 to 201592 May 15 01:17:48.561863 kernel: loop6: detected capacity change from 0 to 8 May 15 01:17:48.573864 kernel: loop7: detected capacity change from 0 to 123192 May 15 01:17:48.577845 (sd-merge)[2294]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 15 01:17:48.578288 (sd-merge)[2294]: Merged extensions into '/usr'. May 15 01:17:48.581241 systemd[1]: Reload requested from client PID 2240 ('systemd-sysext') (unit systemd-sysext.service)... May 15 01:17:48.581252 systemd[1]: Reloading... May 15 01:17:48.631862 zram_generator::config[2321]: No configuration found. May 15 01:17:48.724446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:17:48.785497 systemd[1]: Reloading finished in 203 ms. May 15 01:17:48.803249 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 01:17:48.808030 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 01:17:48.827209 systemd[1]: Starting ensure-sysext.service... May 15 01:17:48.832978 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 01:17:48.839574 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 01:17:48.850262 systemd[1]: Reload requested from client PID 2374 ('systemctl') (unit ensure-sysext.service)... May 15 01:17:48.850273 systemd[1]: Reloading... May 15 01:17:48.852594 systemd-tmpfiles[2375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 01:17:48.852787 systemd-tmpfiles[2375]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 01:17:48.853439 systemd-tmpfiles[2375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 01:17:48.853636 systemd-tmpfiles[2375]: ACLs are not supported, ignoring. May 15 01:17:48.853682 systemd-tmpfiles[2375]: ACLs are not supported, ignoring. May 15 01:17:48.856716 systemd-tmpfiles[2375]: Detected autofs mount point /boot during canonicalization of boot. May 15 01:17:48.856723 systemd-tmpfiles[2375]: Skipping /boot May 15 01:17:48.865200 systemd-tmpfiles[2375]: Detected autofs mount point /boot during canonicalization of boot. May 15 01:17:48.865208 systemd-tmpfiles[2375]: Skipping /boot May 15 01:17:48.866791 systemd-udevd[2376]: Using default interface naming scheme 'v255'. May 15 01:17:48.898863 zram_generator::config[2417]: No configuration found. May 15 01:17:48.924866 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (2435) May 15 01:17:48.942867 kernel: IPMI message handler: version 39.2 May 15 01:17:48.952879 kernel: ipmi device interface May 15 01:17:48.964873 kernel: ipmi_ssif: IPMI SSIF Interface driver May 15 01:17:48.964914 kernel: ipmi_si: IPMI System Interface driver May 15 01:17:48.978258 kernel: ipmi_si: Unable to find any System Interface(s) May 15 01:17:49.014101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
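The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-packet' images onto /usr before the reload. The sketch below lists candidate extension images the way an operator might inspect them; the three search directories are the standard systemd-sysext hierarchy locations, and whether each exists on this particular node is an assumption.

from pathlib import Path

# Standard systemd-sysext search paths; their presence on a given Flatcar
# node is assumed for this sketch.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images():
    """Return candidate extension images (raw files or directories)."""
    found = []
    for d in SEARCH_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(entry)
    return found

if __name__ == "__main__":
    for image in list_sysext_images():
        print(image)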
May 15 01:17:49.093980 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 01:17:49.094362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 15 01:17:49.098848 systemd[1]: Reloading finished in 248 ms. May 15 01:17:49.110270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 01:17:49.130618 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 01:17:49.148837 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 01:17:49.160328 systemd[1]: Finished ensure-sysext.service. May 15 01:17:49.191995 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 01:17:49.197888 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 01:17:49.202909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 01:17:49.203987 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 01:17:49.209736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 01:17:49.215441 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 01:17:49.220992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 01:17:49.221030 lvm[2583]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 01:17:49.226558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 01:17:49.231217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 01:17:49.232175 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 01:17:49.232887 augenrules[2605]: No rules May 15 01:17:49.236716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 01:17:49.237914 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 01:17:49.244222 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 01:17:49.250611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 01:17:49.256645 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 01:17:49.262084 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 01:17:49.267543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 01:17:49.272945 systemd[1]: audit-rules.service: Deactivated successfully. May 15 01:17:49.273126 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 01:17:49.277964 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 01:17:49.284135 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 01:17:49.289632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 01:17:49.289798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 15 01:17:49.294504 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 01:17:49.294650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 01:17:49.299536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:17:49.299688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 01:17:49.304563 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:17:49.304710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 01:17:49.310050 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 01:17:49.314895 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 01:17:49.319522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 01:17:49.333902 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 01:17:49.338904 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 01:17:49.359023 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 01:17:49.362831 lvm[2636]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 01:17:49.363430 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:17:49.363498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 01:17:49.364677 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 01:17:49.371141 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 01:17:49.375750 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 01:17:49.377638 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 01:17:49.393257 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 01:17:49.407202 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 01:17:49.471197 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 01:17:49.476019 systemd[1]: Reached target time-set.target - System Time Set. May 15 01:17:49.476128 systemd-resolved[2614]: Positive Trust Anchors: May 15 01:17:49.476142 systemd-resolved[2614]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 01:17:49.476175 systemd-resolved[2614]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 01:17:49.480052 systemd-resolved[2614]: Using system hostname 'ci-4230.1.1-n-c9ea1a9895'. May 15 01:17:49.481420 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 15 01:17:49.484055 systemd-networkd[2613]: lo: Link UP May 15 01:17:49.484061 systemd-networkd[2613]: lo: Gained carrier May 15 01:17:49.486555 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 01:17:49.487840 systemd-networkd[2613]: bond0: netdev ready May 15 01:17:49.490798 systemd[1]: Reached target sysinit.target - System Initialization. May 15 01:17:49.495041 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 01:17:49.496989 systemd-networkd[2613]: Enumeration completed May 15 01:17:49.499429 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 01:17:49.503814 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 01:17:49.504619 systemd-networkd[2613]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:4a:0a:00.network. May 15 01:17:49.508087 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 01:17:49.512374 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 01:17:49.516684 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 01:17:49.516705 systemd[1]: Reached target paths.target - Path Units. May 15 01:17:49.521027 systemd[1]: Reached target timers.target - Timer Units. May 15 01:17:49.526036 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 01:17:49.531762 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 01:17:49.537929 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 01:17:49.546024 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 01:17:49.550898 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 01:17:49.555837 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 01:17:49.560484 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 01:17:49.564955 systemd[1]: Reached target network.target - Network. May 15 01:17:49.569268 systemd[1]: Reached target sockets.target - Socket Units. May 15 01:17:49.573520 systemd[1]: Reached target basic.target - Basic System. May 15 01:17:49.577717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 01:17:49.577739 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 01:17:49.594951 systemd[1]: Starting containerd.service - containerd container runtime... May 15 01:17:49.600477 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 01:17:49.606024 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 01:17:49.611528 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 01:17:49.617089 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 01:17:49.621394 jq[2666]: false May 15 01:17:49.621493 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 15 01:17:49.621699 coreos-metadata[2662]: May 15 01:17:49.621 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 01:17:49.622599 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 01:17:49.624133 coreos-metadata[2662]: May 15 01:17:49.624 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 01:17:49.627136 dbus-daemon[2663]: [system] SELinux support is enabled May 15 01:17:49.628100 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 01:17:49.633687 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 01:17:49.636709 extend-filesystems[2668]: Found loop4 May 15 01:17:49.642839 extend-filesystems[2668]: Found loop5 May 15 01:17:49.642839 extend-filesystems[2668]: Found loop6 May 15 01:17:49.642839 extend-filesystems[2668]: Found loop7 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme1n1 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1p1 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1p2 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1p3 May 15 01:17:49.642839 extend-filesystems[2668]: Found usr May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1p4 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1p6 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1p7 May 15 01:17:49.642839 extend-filesystems[2668]: Found nvme0n1p9 May 15 01:17:49.642839 extend-filesystems[2668]: Checking size of /dev/nvme0n1p9 May 15 01:17:49.785618 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks May 15 01:17:49.785643 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (2426) May 15 01:17:49.639368 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 01:17:49.785716 extend-filesystems[2668]: Resized partition /dev/nvme0n1p9 May 15 01:17:49.779954 dbus-daemon[2663]: [system] Successfully activated service 'org.freedesktop.systemd1' May 15 01:17:49.651327 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 01:17:49.790500 extend-filesystems[2689]: resize2fs 1.47.1 (20-May-2024) May 15 01:17:49.657551 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 01:17:49.697621 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 01:17:49.706650 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 01:17:49.707238 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 01:17:49.795722 update_engine[2696]: I20250515 01:17:49.759063 2696 main.cc:92] Flatcar Update Engine starting May 15 01:17:49.795722 update_engine[2696]: I20250515 01:17:49.764130 2696 update_check_scheduler.cc:74] Next update check in 5m47s May 15 01:17:49.707848 systemd[1]: Starting update-engine.service - Update Engine... May 15 01:17:49.795989 jq[2697]: true May 15 01:17:49.716140 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 01:17:49.724804 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 15 01:17:49.796258 tar[2701]: linux-arm64/LICENSE May 15 01:17:49.796258 tar[2701]: linux-arm64/helm May 15 01:17:49.738074 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 01:17:49.796501 jq[2702]: true May 15 01:17:49.738266 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 01:17:49.738600 systemd[1]: motdgen.service: Deactivated successfully. May 15 01:17:49.738762 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 01:17:49.742767 systemd-logind[2688]: Watching system buttons on /dev/input/event0 (Power Button) May 15 01:17:49.743162 systemd-logind[2688]: New seat seat0. May 15 01:17:49.748866 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 01:17:49.750882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 01:17:49.763167 systemd[1]: Started systemd-logind.service - User Login Management. May 15 01:17:49.768017 (ntainerd)[2704]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 01:17:49.799494 systemd[1]: Started update-engine.service - Update Engine. May 15 01:17:49.805061 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 01:17:49.805218 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 01:17:49.809998 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 01:17:49.810107 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 01:17:49.814056 bash[2727]: Updated "/home/core/.ssh/authorized_keys" May 15 01:17:49.833068 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 01:17:49.841714 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 01:17:49.848921 systemd[1]: Starting sshkeys.service... May 15 01:17:49.861227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 01:17:49.861801 locksmithd[2730]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 01:17:49.867146 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 01:17:49.886201 coreos-metadata[2747]: May 15 01:17:49.886 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 01:17:49.887407 coreos-metadata[2747]: May 15 01:17:49.887 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 01:17:49.911273 containerd[2704]: time="2025-05-15T01:17:49.911186840Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 01:17:49.933534 containerd[2704]: time="2025-05-15T01:17:49.933492120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 01:17:49.934877 containerd[2704]: time="2025-05-15T01:17:49.934845960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 01:17:49.934897 containerd[2704]: time="2025-05-15T01:17:49.934878720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 01:17:49.934915 containerd[2704]: time="2025-05-15T01:17:49.934895000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 01:17:49.935077 containerd[2704]: time="2025-05-15T01:17:49.935061120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 01:17:49.935097 containerd[2704]: time="2025-05-15T01:17:49.935085320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 01:17:49.935154 containerd[2704]: time="2025-05-15T01:17:49.935139040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 01:17:49.935174 containerd[2704]: time="2025-05-15T01:17:49.935151800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 01:17:49.935354 containerd[2704]: time="2025-05-15T01:17:49.935337720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 01:17:49.935379 containerd[2704]: time="2025-05-15T01:17:49.935353240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 01:17:49.935379 containerd[2704]: time="2025-05-15T01:17:49.935366400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 01:17:49.935379 containerd[2704]: time="2025-05-15T01:17:49.935375000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 01:17:49.935460 containerd[2704]: time="2025-05-15T01:17:49.935448080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 01:17:49.935789 containerd[2704]: time="2025-05-15T01:17:49.935773200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 01:17:49.935933 containerd[2704]: time="2025-05-15T01:17:49.935917800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 01:17:49.935933 containerd[2704]: time="2025-05-15T01:17:49.935931240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 01:17:49.936026 containerd[2704]: time="2025-05-15T01:17:49.936013000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 15 01:17:49.936068 containerd[2704]: time="2025-05-15T01:17:49.936056600Z" level=info msg="metadata content store policy set" policy=shared May 15 01:17:49.942660 containerd[2704]: time="2025-05-15T01:17:49.942637880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 01:17:49.942700 containerd[2704]: time="2025-05-15T01:17:49.942678200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 01:17:49.942700 containerd[2704]: time="2025-05-15T01:17:49.942693760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 01:17:49.942763 containerd[2704]: time="2025-05-15T01:17:49.942709480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 01:17:49.942763 containerd[2704]: time="2025-05-15T01:17:49.942722400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 01:17:49.942871 containerd[2704]: time="2025-05-15T01:17:49.942847800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 01:17:49.943081 containerd[2704]: time="2025-05-15T01:17:49.943067040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 01:17:49.943185 containerd[2704]: time="2025-05-15T01:17:49.943171640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 01:17:49.943204 containerd[2704]: time="2025-05-15T01:17:49.943187640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 01:17:49.943204 containerd[2704]: time="2025-05-15T01:17:49.943200880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 01:17:49.943237 containerd[2704]: time="2025-05-15T01:17:49.943217480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 01:17:49.943237 containerd[2704]: time="2025-05-15T01:17:49.943230920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 01:17:49.943270 containerd[2704]: time="2025-05-15T01:17:49.943243240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 01:17:49.943270 containerd[2704]: time="2025-05-15T01:17:49.943256320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 01:17:49.943312 containerd[2704]: time="2025-05-15T01:17:49.943270520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 01:17:49.943312 containerd[2704]: time="2025-05-15T01:17:49.943283760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 01:17:49.943312 containerd[2704]: time="2025-05-15T01:17:49.943295880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 01:17:49.943312 containerd[2704]: time="2025-05-15T01:17:49.943307000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 15 01:17:49.943374 containerd[2704]: time="2025-05-15T01:17:49.943325440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943374 containerd[2704]: time="2025-05-15T01:17:49.943338680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943374 containerd[2704]: time="2025-05-15T01:17:49.943350920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943374 containerd[2704]: time="2025-05-15T01:17:49.943366280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943445 containerd[2704]: time="2025-05-15T01:17:49.943378240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943445 containerd[2704]: time="2025-05-15T01:17:49.943390520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943445 containerd[2704]: time="2025-05-15T01:17:49.943401320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943445 containerd[2704]: time="2025-05-15T01:17:49.943413160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943445 containerd[2704]: time="2025-05-15T01:17:49.943425360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943445 containerd[2704]: time="2025-05-15T01:17:49.943439000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943541 containerd[2704]: time="2025-05-15T01:17:49.943449720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943541 containerd[2704]: time="2025-05-15T01:17:49.943461360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943541 containerd[2704]: time="2025-05-15T01:17:49.943472400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943541 containerd[2704]: time="2025-05-15T01:17:49.943485480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 01:17:49.943541 containerd[2704]: time="2025-05-15T01:17:49.943504880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943541 containerd[2704]: time="2025-05-15T01:17:49.943520840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943541 containerd[2704]: time="2025-05-15T01:17:49.943530920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 01:17:49.943711 containerd[2704]: time="2025-05-15T01:17:49.943699800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 01:17:49.943735 containerd[2704]: time="2025-05-15T01:17:49.943715800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 01:17:49.943735 containerd[2704]: time="2025-05-15T01:17:49.943725360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 01:17:49.943773 containerd[2704]: time="2025-05-15T01:17:49.943737960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 01:17:49.943773 containerd[2704]: time="2025-05-15T01:17:49.943746360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 01:17:49.943773 containerd[2704]: time="2025-05-15T01:17:49.943757080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 01:17:49.943773 containerd[2704]: time="2025-05-15T01:17:49.943770440Z" level=info msg="NRI interface is disabled by configuration." May 15 01:17:49.943839 containerd[2704]: time="2025-05-15T01:17:49.943780320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 01:17:49.944081 containerd[2704]: time="2025-05-15T01:17:49.944046240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 01:17:49.944180 containerd[2704]: time="2025-05-15T01:17:49.944089480Z" level=info msg="Connect containerd service" May 15 01:17:49.944180 containerd[2704]: time="2025-05-15T01:17:49.944118040Z" level=info msg="using legacy CRI server" May 15 01:17:49.944180 containerd[2704]: time="2025-05-15T01:17:49.944124680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 01:17:49.944362 containerd[2704]: time="2025-05-15T01:17:49.944349120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 01:17:49.944970 containerd[2704]: time="2025-05-15T01:17:49.944949320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 01:17:49.945179 containerd[2704]: time="2025-05-15T01:17:49.945136320Z" level=info msg="Start subscribing containerd event" May 15 01:17:49.945206 containerd[2704]: time="2025-05-15T01:17:49.945196400Z" level=info msg="Start recovering state" May 15 01:17:49.945273 containerd[2704]: time="2025-05-15T01:17:49.945261880Z" level=info msg="Start event monitor" May 15 01:17:49.945293 containerd[2704]: time="2025-05-15T01:17:49.945274760Z" level=info msg="Start snapshots syncer" May 15 01:17:49.945293 containerd[2704]: time="2025-05-15T01:17:49.945283520Z" level=info msg="Start cni network conf syncer for default" May 15 01:17:49.945293 containerd[2704]: time="2025-05-15T01:17:49.945291120Z" level=info msg="Start streaming server" May 15 01:17:49.945427 containerd[2704]: time="2025-05-15T01:17:49.945414560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 01:17:49.945460 containerd[2704]: time="2025-05-15T01:17:49.945452080Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 01:17:49.945507 containerd[2704]: time="2025-05-15T01:17:49.945499040Z" level=info msg="containerd successfully booted in 0.035674s" May 15 01:17:49.945551 systemd[1]: Started containerd.service - containerd container runtime. May 15 01:17:50.113679 tar[2701]: linux-arm64/README.md May 15 01:17:50.130888 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 01:17:50.195865 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 15 01:17:50.211890 extend-filesystems[2689]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 15 01:17:50.211890 extend-filesystems[2689]: old_desc_blocks = 1, new_desc_blocks = 112 May 15 01:17:50.211890 extend-filesystems[2689]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 15 01:17:50.239977 extend-filesystems[2668]: Resized filesystem in /dev/nvme0n1p9 May 15 01:17:50.214192 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 01:17:50.214455 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 01:17:50.226725 systemd[1]: extend-filesystems.service: Consumed 208ms CPU time, 69M memory peak. 
May 15 01:17:50.506870 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 15 01:17:50.522861 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 15 01:17:50.527440 systemd-networkd[2613]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:4a:0a:01.network. May 15 01:17:50.624251 coreos-metadata[2662]: May 15 01:17:50.624 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 15 01:17:50.624684 coreos-metadata[2662]: May 15 01:17:50.624 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 01:17:50.736890 sshd_keygen[2693]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 01:17:50.754920 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 01:17:50.771099 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 01:17:50.782802 systemd[1]: issuegen.service: Deactivated successfully. May 15 01:17:50.783803 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 01:17:50.790312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 01:17:50.802369 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 01:17:50.808771 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 01:17:50.814889 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 01:17:50.819943 systemd[1]: Reached target getty.target - Login Prompts. May 15 01:17:50.887559 coreos-metadata[2747]: May 15 01:17:50.887 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 15 01:17:50.887949 coreos-metadata[2747]: May 15 01:17:50.887 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 01:17:51.089872 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 15 01:17:51.106862 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 15 01:17:51.107332 systemd-networkd[2613]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 15 01:17:51.108521 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 01:17:51.108994 systemd-networkd[2613]: enP1p1s0f0np0: Link UP May 15 01:17:51.109269 systemd-networkd[2613]: enP1p1s0f0np0: Gained carrier May 15 01:17:51.127874 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 15 01:17:51.139185 systemd-networkd[2613]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:4a:0a:00.network. May 15 01:17:51.139459 systemd-networkd[2613]: enP1p1s0f1np1: Link UP May 15 01:17:51.139653 systemd-networkd[2613]: enP1p1s0f1np1: Gained carrier May 15 01:17:51.149145 systemd-networkd[2613]: bond0: Link UP May 15 01:17:51.149434 systemd-networkd[2613]: bond0: Gained carrier May 15 01:17:51.149602 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:17:51.150193 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:17:51.150443 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:17:51.150565 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:17:51.232097 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 15 01:17:51.232132 kernel: bond0: active interface up! 
May 15 01:17:51.355870 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 15 01:17:52.261207 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:17:52.516955 systemd-networkd[2613]: bond0: Gained IPv6LL May 15 01:17:52.517357 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:17:52.519904 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 01:17:52.526025 systemd[1]: Reached target network-online.target - Network is Online. May 15 01:17:52.547040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:17:52.553568 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 01:17:52.575958 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 01:17:52.624806 coreos-metadata[2662]: May 15 01:17:52.624 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 15 01:17:52.888082 coreos-metadata[2747]: May 15 01:17:52.888 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 15 01:17:53.186948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:17:53.192974 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:17:53.586663 kubelet[2812]: E0515 01:17:53.586572 2812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:17:53.588910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:17:53.589048 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:17:53.589352 systemd[1]: kubelet.service: Consumed 702ms CPU time, 261.9M memory peak. May 15 01:17:54.401914 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 15 01:17:54.402224 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 15 01:17:54.770589 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 01:17:54.787195 systemd[1]: Started sshd@0-147.28.151.170:22-139.178.68.195:51686.service - OpenSSH per-connection server daemon (139.178.68.195:51686). May 15 01:17:55.206035 sshd[2839]: Accepted publickey for core from 139.178.68.195 port 51686 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:17:55.207317 sshd-session[2839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:55.216751 systemd-logind[2688]: New session 1 of user core. May 15 01:17:55.218157 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 01:17:55.234202 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 01:17:55.246064 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 01:17:55.253144 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 01:17:55.261468 (systemd)[2843]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 01:17:55.263450 systemd-logind[2688]: New session c1 of user core. 
May 15 01:17:55.301305 coreos-metadata[2747]: May 15 01:17:55.301 INFO Fetch successful May 15 01:17:55.351591 unknown[2747]: wrote ssh authorized keys file for user: core May 15 01:17:55.376986 update-ssh-keys[2850]: Updated "/home/core/.ssh/authorized_keys" May 15 01:17:55.378138 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 01:17:55.383722 systemd[2843]: Queued start job for default target default.target. May 15 01:17:55.392256 systemd[1]: Finished sshkeys.service. May 15 01:17:55.393123 systemd[2843]: Created slice app.slice - User Application Slice. May 15 01:17:55.393148 systemd[2843]: Reached target paths.target - Paths. May 15 01:17:55.393179 systemd[2843]: Reached target timers.target - Timers. May 15 01:17:55.394452 systemd[2843]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 01:17:55.403730 systemd[2843]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 01:17:55.403782 systemd[2843]: Reached target sockets.target - Sockets. May 15 01:17:55.403824 systemd[2843]: Reached target basic.target - Basic System. May 15 01:17:55.403850 systemd[2843]: Reached target default.target - Main User Target. May 15 01:17:55.403882 systemd[2843]: Startup finished in 135ms. May 15 01:17:55.404153 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 01:17:55.416997 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 01:17:55.439514 coreos-metadata[2662]: May 15 01:17:55.439 INFO Fetch successful May 15 01:17:55.506933 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 01:17:55.513478 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 15 01:17:55.726570 systemd[1]: Started sshd@1-147.28.151.170:22-139.178.68.195:58094.service - OpenSSH per-connection server daemon (139.178.68.195:58094). May 15 01:17:55.849601 login[2792]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 15 01:17:55.850925 login[2793]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:55.853974 systemd-logind[2688]: New session 2 of user core. May 15 01:17:55.855174 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 01:17:55.874690 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 15 01:17:55.875197 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 01:17:55.877921 systemd[1]: Startup finished in 11.615s (kernel) + 21.547s (initrd) + 9.973s (userspace) = 43.136s. May 15 01:17:56.145925 sshd[2865]: Accepted publickey for core from 139.178.68.195 port 58094 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:17:56.147028 sshd-session[2865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:56.149954 systemd-logind[2688]: New session 4 of user core. May 15 01:17:56.158963 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 01:17:56.440346 sshd[2883]: Connection closed by 139.178.68.195 port 58094 May 15 01:17:56.440840 sshd-session[2865]: pam_unix(sshd:session): session closed for user core May 15 01:17:56.444509 systemd[1]: sshd@1-147.28.151.170:22-139.178.68.195:58094.service: Deactivated successfully. May 15 01:17:56.446327 systemd[1]: session-4.scope: Deactivated successfully. May 15 01:17:56.447565 systemd-logind[2688]: Session 4 logged out. Waiting for processes to exit. May 15 01:17:56.448179 systemd-logind[2688]: Removed session 4. 
May 15 01:17:56.521526 systemd[1]: Started sshd@2-147.28.151.170:22-139.178.68.195:58096.service - OpenSSH per-connection server daemon (139.178.68.195:58096). May 15 01:17:56.850373 login[2792]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:56.854489 systemd-logind[2688]: New session 3 of user core. May 15 01:17:56.864026 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 01:17:56.953975 sshd[2889]: Accepted publickey for core from 139.178.68.195 port 58096 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:17:56.955123 sshd-session[2889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:56.957849 systemd-logind[2688]: New session 5 of user core. May 15 01:17:56.969968 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 01:17:57.259014 sshd[2901]: Connection closed by 139.178.68.195 port 58096 May 15 01:17:57.259789 sshd-session[2889]: pam_unix(sshd:session): session closed for user core May 15 01:17:57.263439 systemd[1]: sshd@2-147.28.151.170:22-139.178.68.195:58096.service: Deactivated successfully. May 15 01:17:57.265274 systemd[1]: session-5.scope: Deactivated successfully. May 15 01:17:57.266392 systemd-logind[2688]: Session 5 logged out. Waiting for processes to exit. May 15 01:17:57.266931 systemd-logind[2688]: Removed session 5. May 15 01:17:57.332663 systemd[1]: Started sshd@3-147.28.151.170:22-139.178.68.195:58106.service - OpenSSH per-connection server daemon (139.178.68.195:58106). May 15 01:17:57.760002 sshd[2908]: Accepted publickey for core from 139.178.68.195 port 58106 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:17:57.761006 sshd-session[2908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:57.763963 systemd-logind[2688]: New session 6 of user core. May 15 01:17:57.765378 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:17:57.772980 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 01:17:58.058103 sshd[2910]: Connection closed by 139.178.68.195 port 58106 May 15 01:17:58.058585 sshd-session[2908]: pam_unix(sshd:session): session closed for user core May 15 01:17:58.062234 systemd[1]: sshd@3-147.28.151.170:22-139.178.68.195:58106.service: Deactivated successfully. May 15 01:17:58.064568 systemd[1]: session-6.scope: Deactivated successfully. May 15 01:17:58.065122 systemd-logind[2688]: Session 6 logged out. Waiting for processes to exit. May 15 01:17:58.065671 systemd-logind[2688]: Removed session 6. May 15 01:17:58.129743 systemd[1]: Started sshd@4-147.28.151.170:22-139.178.68.195:58118.service - OpenSSH per-connection server daemon (139.178.68.195:58118). May 15 01:17:58.551198 sshd[2917]: Accepted publickey for core from 139.178.68.195 port 58118 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:17:58.552221 sshd-session[2917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:58.555152 systemd-logind[2688]: New session 7 of user core. May 15 01:17:58.563973 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 15 01:17:58.802819 sudo[2920]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 01:17:58.803100 sudo[2920]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:17:58.814888 sudo[2920]: pam_unix(sudo:session): session closed for user root May 15 01:17:58.878711 sshd[2919]: Connection closed by 139.178.68.195 port 58118 May 15 01:17:58.879397 sshd-session[2917]: pam_unix(sshd:session): session closed for user core May 15 01:17:58.883225 systemd[1]: sshd@4-147.28.151.170:22-139.178.68.195:58118.service: Deactivated successfully. May 15 01:17:58.886311 systemd[1]: session-7.scope: Deactivated successfully. May 15 01:17:58.886876 systemd-logind[2688]: Session 7 logged out. Waiting for processes to exit. May 15 01:17:58.887479 systemd-logind[2688]: Removed session 7. May 15 01:17:58.952809 systemd[1]: Started sshd@5-147.28.151.170:22-139.178.68.195:58128.service - OpenSSH per-connection server daemon (139.178.68.195:58128). May 15 01:17:59.368003 sshd[2926]: Accepted publickey for core from 139.178.68.195 port 58128 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:17:59.369045 sshd-session[2926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:17:59.371738 systemd-logind[2688]: New session 8 of user core. May 15 01:17:59.380972 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 01:17:59.599172 sudo[2930]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 01:17:59.599443 sudo[2930]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:17:59.602196 sudo[2930]: pam_unix(sudo:session): session closed for user root May 15 01:17:59.606527 sudo[2929]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 01:17:59.606777 sudo[2929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:17:59.621182 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 01:17:59.643214 augenrules[2952]: No rules May 15 01:17:59.644306 systemd[1]: audit-rules.service: Deactivated successfully. May 15 01:17:59.644518 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 01:17:59.645242 sudo[2929]: pam_unix(sudo:session): session closed for user root May 15 01:17:59.706764 sshd[2928]: Connection closed by 139.178.68.195 port 58128 May 15 01:17:59.707129 sshd-session[2926]: pam_unix(sshd:session): session closed for user core May 15 01:17:59.709889 systemd[1]: sshd@5-147.28.151.170:22-139.178.68.195:58128.service: Deactivated successfully. May 15 01:17:59.712373 systemd[1]: session-8.scope: Deactivated successfully. May 15 01:17:59.714296 systemd-logind[2688]: Session 8 logged out. Waiting for processes to exit. May 15 01:17:59.714816 systemd-logind[2688]: Removed session 8. May 15 01:17:59.779601 systemd[1]: Started sshd@6-147.28.151.170:22-139.178.68.195:58140.service - OpenSSH per-connection server daemon (139.178.68.195:58140). May 15 01:18:00.203252 sshd[2961]: Accepted publickey for core from 139.178.68.195 port 58140 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:18:00.204303 sshd-session[2961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:18:00.207018 systemd-logind[2688]: New session 9 of user core. May 15 01:18:00.220966 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 15 01:18:00.436459 sudo[2964]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 01:18:00.436718 sudo[2964]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 01:18:00.729051 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 01:18:00.729284 (dockerd)[2994]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 01:18:00.942279 dockerd[2994]: time="2025-05-15T01:18:00.942226640Z" level=info msg="Starting up" May 15 01:18:01.021130 dockerd[2994]: time="2025-05-15T01:18:01.021058720Z" level=info msg="Loading containers: start." May 15 01:18:01.154880 kernel: Initializing XFRM netlink socket May 15 01:18:01.172952 systemd-timesyncd[2615]: Network configuration changed, trying to establish connection. May 15 01:18:01.217904 systemd-networkd[2613]: docker0: Link UP May 15 01:18:01.253021 dockerd[2994]: time="2025-05-15T01:18:01.252991280Z" level=info msg="Loading containers: done." May 15 01:18:01.261996 dockerd[2994]: time="2025-05-15T01:18:01.261963840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 01:18:01.262091 dockerd[2994]: time="2025-05-15T01:18:01.262037440Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 15 01:18:01.262221 dockerd[2994]: time="2025-05-15T01:18:01.262205360Z" level=info msg="Daemon has completed initialization" May 15 01:18:01.282047 dockerd[2994]: time="2025-05-15T01:18:01.281903160Z" level=info msg="API listen on /run/docker.sock" May 15 01:18:01.282015 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 01:18:01.298852 systemd-timesyncd[2615]: Contacted time server [2604:2dc0:101:200::151]:123 (2.flatcar.pool.ntp.org). May 15 01:18:01.298924 systemd-timesyncd[2615]: Initial clock synchronization to Thu 2025-05-15 01:18:01.254060 UTC. May 15 01:18:01.859733 containerd[2704]: time="2025-05-15T01:18:01.859695400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 01:18:01.905590 systemd[1]: Started sshd@7-147.28.151.170:22-103.155.161.90:36562.service - OpenSSH per-connection server daemon (103.155.161.90:36562). May 15 01:18:02.010069 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3124973013-merged.mount: Deactivated successfully. May 15 01:18:02.394456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138982128.mount: Deactivated successfully. May 15 01:18:02.924285 sshd[3235]: Invalid user from 103.155.161.90 port 36562 May 15 01:18:03.713469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 01:18:03.726997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:18:03.830060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 01:18:03.833374 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:18:03.865198 kubelet[3309]: E0515 01:18:03.865160 3309 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:18:03.868007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:18:03.868147 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:18:03.869013 systemd[1]: kubelet.service: Consumed 133ms CPU time, 117.5M memory peak. May 15 01:18:03.934907 containerd[2704]: time="2025-05-15T01:18:03.934870315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:03.935305 containerd[2704]: time="2025-05-15T01:18:03.935244974Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" May 15 01:18:03.936027 containerd[2704]: time="2025-05-15T01:18:03.936006021Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:03.938768 containerd[2704]: time="2025-05-15T01:18:03.938746765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:03.939961 containerd[2704]: time="2025-05-15T01:18:03.939932779Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.080191766s" May 15 01:18:03.940013 containerd[2704]: time="2025-05-15T01:18:03.939970361Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 15 01:18:03.940581 containerd[2704]: time="2025-05-15T01:18:03.940563887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 01:18:05.619365 containerd[2704]: time="2025-05-15T01:18:05.619332995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:05.619563 containerd[2704]: time="2025-05-15T01:18:05.619407942Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" May 15 01:18:05.620369 containerd[2704]: time="2025-05-15T01:18:05.620346423Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:05.623100 containerd[2704]: time="2025-05-15T01:18:05.623076064Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:05.624140 containerd[2704]: time="2025-05-15T01:18:05.624118865Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.683527926s" May 15 01:18:05.624189 containerd[2704]: time="2025-05-15T01:18:05.624145404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 15 01:18:05.624522 containerd[2704]: time="2025-05-15T01:18:05.624499549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 01:18:07.051669 containerd[2704]: time="2025-05-15T01:18:07.051630830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:07.051969 containerd[2704]: time="2025-05-15T01:18:07.051675260Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" May 15 01:18:07.052746 containerd[2704]: time="2025-05-15T01:18:07.052726814Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:07.055492 containerd[2704]: time="2025-05-15T01:18:07.055469670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:07.056551 containerd[2704]: time="2025-05-15T01:18:07.056525495Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.43199306s" May 15 01:18:07.056576 containerd[2704]: time="2025-05-15T01:18:07.056558269Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 15 01:18:07.056872 containerd[2704]: time="2025-05-15T01:18:07.056849960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 01:18:08.035427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688865571.mount: Deactivated successfully. 
May 15 01:18:08.410594 containerd[2704]: time="2025-05-15T01:18:08.410434061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:08.410901 containerd[2704]: time="2025-05-15T01:18:08.410458495Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" May 15 01:18:08.411442 containerd[2704]: time="2025-05-15T01:18:08.411088341Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:08.412989 containerd[2704]: time="2025-05-15T01:18:08.412941150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:08.413616 containerd[2704]: time="2025-05-15T01:18:08.413557862Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.356673291s" May 15 01:18:08.413616 containerd[2704]: time="2025-05-15T01:18:08.413592995Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 15 01:18:08.413930 containerd[2704]: time="2025-05-15T01:18:08.413904844Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 01:18:08.724920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684187479.mount: Deactivated successfully. 
May 15 01:18:09.472549 containerd[2704]: time="2025-05-15T01:18:09.472477512Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 15 01:18:09.472549 containerd[2704]: time="2025-05-15T01:18:09.472480227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:09.474769 containerd[2704]: time="2025-05-15T01:18:09.474196539Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:09.478541 containerd[2704]: time="2025-05-15T01:18:09.478510915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:09.479860 containerd[2704]: time="2025-05-15T01:18:09.479825820Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.06588081s" May 15 01:18:09.479917 containerd[2704]: time="2025-05-15T01:18:09.479862075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 01:18:09.480261 containerd[2704]: time="2025-05-15T01:18:09.480242520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 01:18:09.689507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26214234.mount: Deactivated successfully. 
May 15 01:18:09.690449 containerd[2704]: time="2025-05-15T01:18:09.690425195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:09.690517 containerd[2704]: time="2025-05-15T01:18:09.690483771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 15 01:18:09.691130 containerd[2704]: time="2025-05-15T01:18:09.691112135Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:09.693067 containerd[2704]: time="2025-05-15T01:18:09.693050372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:09.693944 containerd[2704]: time="2025-05-15T01:18:09.693918510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 213.6486ms" May 15 01:18:09.693997 containerd[2704]: time="2025-05-15T01:18:09.693947499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 01:18:09.694312 containerd[2704]: time="2025-05-15T01:18:09.694292925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 01:18:09.876170 sshd[3235]: Connection closed by invalid user 103.155.161.90 port 36562 [preauth] May 15 01:18:09.878053 systemd[1]: sshd@7-147.28.151.170:22-103.155.161.90:36562.service: Deactivated successfully. May 15 01:18:10.085924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298199104.mount: Deactivated successfully. 
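Editor's note: the pulls above log both the payload size and the wall-clock time, so the effective registry throughput falls straight out of the entries. A minimal sketch of that arithmetic, with the (bytes, seconds) pairs copied from the "Pulled image" messages above; illustration only, not part of the boot flow.

```python
# Effective throughput of the image pulls logged above.
# (bytes, seconds) are taken verbatim from containerd's "Pulled image" entries.
pulls = {
    "registry.k8s.io/kube-proxy:v1.32.4":      (27_369_370, 1.356673291),
    "registry.k8s.io/coredns/coredns:v1.11.3": (16_948_420, 1.06588081),
    "registry.k8s.io/pause:3.10":              (267_933,    0.2136486),   # 213.6486ms
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1 << 20)
    print(f"{image:45s} {mib_per_s:7.2f} MiB/s")
```

The pause figure is dominated by request latency rather than bandwidth; the image is only about 260 KiB.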
May 15 01:18:13.405064 containerd[2704]: time="2025-05-15T01:18:13.405020742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:13.405488 containerd[2704]: time="2025-05-15T01:18:13.405067638Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 15 01:18:13.406248 containerd[2704]: time="2025-05-15T01:18:13.406224811Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:13.409472 containerd[2704]: time="2025-05-15T01:18:13.409449308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:13.410704 containerd[2704]: time="2025-05-15T01:18:13.410688729Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.71636761s" May 15 01:18:13.410736 containerd[2704]: time="2025-05-15T01:18:13.410709740Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 15 01:18:13.963427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 01:18:13.974057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:18:14.070794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:18:14.074104 (kubelet)[3537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 01:18:14.104675 kubelet[3537]: E0515 01:18:14.104637 3537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:18:14.106785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:18:14.106930 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:18:14.108933 systemd[1]: kubelet.service: Consumed 125ms CPU time, 116M memory peak. May 15 01:18:20.429472 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:18:20.429604 systemd[1]: kubelet.service: Consumed 125ms CPU time, 116M memory peak. May 15 01:18:20.440139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:18:20.457478 systemd[1]: Reload requested from client PID 3575 ('systemctl') (unit session-9.scope)... May 15 01:18:20.457489 systemd[1]: Reloading... May 15 01:18:20.542863 zram_generator::config[3627]: No configuration found. May 15 01:18:20.632599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:18:20.724293 systemd[1]: Reloading finished in 266 ms. 
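Editor's note: the kubelet exit at 01:18:14 is the expected pre-bootstrap state. The unit is started before /var/lib/kubelet/config.yaml exists (it is typically written by kubeadm or an equivalent join/init step), so systemd keeps scheduling restarts, with the counter already at 2 above, until the file appears. A small wait-loop sketch of that condition, using only the path quoted in the error; the polling interval and timeout are arbitrary.

```python
import time
from pathlib import Path

# Path taken from the kubelet error message above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def wait_for_kubelet_config(poll_seconds: float = 2.0, timeout: float = 300.0) -> bool:
    """Return True once the kubelet config file exists, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if KUBELET_CONFIG.is_file():
            return True
        time.sleep(poll_seconds)
    return False

if __name__ == "__main__":
    print("kubelet config present:", wait_for_kubelet_config(timeout=10))
```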
May 15 01:18:20.774833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:18:20.778226 (kubelet)[3679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 01:18:20.779071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:18:20.779557 systemd[1]: kubelet.service: Deactivated successfully. May 15 01:18:20.779754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:18:20.779791 systemd[1]: kubelet.service: Consumed 79ms CPU time, 90.3M memory peak. May 15 01:18:20.782422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:18:20.882893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:18:20.886218 (kubelet)[3691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 01:18:20.916842 kubelet[3691]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:18:20.916842 kubelet[3691]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 01:18:20.916842 kubelet[3691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:18:20.917158 kubelet[3691]: I0515 01:18:20.916904 3691 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 01:18:21.847747 kubelet[3691]: I0515 01:18:21.847639 3691 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 01:18:21.847747 kubelet[3691]: I0515 01:18:21.847671 3691 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 01:18:21.847943 kubelet[3691]: I0515 01:18:21.847910 3691 server.go:954] "Client rotation is on, will bootstrap in background" May 15 01:18:21.881192 kubelet[3691]: E0515 01:18:21.881167 3691 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.151.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.151.170:6443: connect: connection refused" logger="UnhandledError" May 15 01:18:21.882775 kubelet[3691]: I0515 01:18:21.882751 3691 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 01:18:21.888284 kubelet[3691]: E0515 01:18:21.888263 3691 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 01:18:21.888311 kubelet[3691]: I0515 01:18:21.888283 3691 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
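Editor's note: every "connection refused" against https://147.28.151.170:6443 in this window has the same cause: the kubelet starts bootstrapping its client certificate and its informers before the kube-apiserver static pod (created further down in the log) is listening. A bare TCP probe sketch for that endpoint, with the address taken from the log and the retry cadence chosen arbitrarily.

```python
import socket
import time

# Control-plane endpoint exactly as it appears in the kubelet errors above.
APISERVER = ("147.28.151.170", 6443)

def wait_for_apiserver(retries: int = 30, delay: float = 2.0) -> bool:
    """Retry a plain TCP connect until the apiserver port starts accepting."""
    for attempt in range(1, retries + 1):
        try:
            with socket.create_connection(APISERVER, timeout=2.0):
                print(f"attempt {attempt}: port open")
                return True
        except OSError as exc:  # ConnectionRefusedError, timeouts, ...
            print(f"attempt {attempt}: {exc}")
            time.sleep(delay)
    return False
```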
May 15 01:18:21.908213 kubelet[3691]: I0515 01:18:21.908189 3691 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 01:18:21.909410 kubelet[3691]: I0515 01:18:21.909373 3691 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 01:18:21.909568 kubelet[3691]: I0515 01:18:21.909412 3691 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-c9ea1a9895","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 01:18:21.909648 kubelet[3691]: I0515 01:18:21.909641 3691 topology_manager.go:138] "Creating topology manager with none policy" May 15 01:18:21.909672 kubelet[3691]: I0515 01:18:21.909650 3691 container_manager_linux.go:304] "Creating device plugin manager" May 15 01:18:21.909981 kubelet[3691]: I0515 01:18:21.909965 3691 state_mem.go:36] "Initialized new in-memory state store" May 15 01:18:21.912755 kubelet[3691]: I0515 01:18:21.912737 3691 kubelet.go:446] "Attempting to sync node with API server" May 15 01:18:21.912783 kubelet[3691]: I0515 01:18:21.912763 3691 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 01:18:21.912783 kubelet[3691]: I0515 01:18:21.912781 3691 kubelet.go:352] "Adding apiserver pod source" May 15 01:18:21.912820 kubelet[3691]: I0515 01:18:21.912792 3691 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 01:18:21.915305 kubelet[3691]: I0515 01:18:21.915285 3691 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 01:18:21.915457 kubelet[3691]: W0515 01:18:21.915423 3691 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.151.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.151.170:6443: connect: connection refused May 15 01:18:21.915499 kubelet[3691]: E0515 01:18:21.915485 3691 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.151.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.151.170:6443: connect: connection refused" logger="UnhandledError" May 15 01:18:21.915877 kubelet[3691]: I0515 01:18:21.915865 3691 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 01:18:21.915939 kubelet[3691]: W0515 01:18:21.915905 3691 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.151.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-c9ea1a9895&limit=500&resourceVersion=0": dial tcp 147.28.151.170:6443: connect: connection refused May 15 01:18:21.915962 kubelet[3691]: E0515 01:18:21.915950 3691 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.151.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-c9ea1a9895&limit=500&resourceVersion=0\": dial tcp 147.28.151.170:6443: connect: connection refused" logger="UnhandledError" May 15 01:18:21.915986 kubelet[3691]: W0515 01:18:21.915978 3691 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 01:18:21.919316 kubelet[3691]: I0515 01:18:21.919295 3691 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 01:18:21.919557 kubelet[3691]: I0515 01:18:21.919330 3691 server.go:1287] "Started kubelet" May 15 01:18:21.919557 kubelet[3691]: I0515 01:18:21.919388 3691 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 01:18:21.919887 kubelet[3691]: I0515 01:18:21.919824 3691 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 01:18:21.920132 kubelet[3691]: I0515 01:18:21.920119 3691 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 01:18:21.920513 kubelet[3691]: I0515 01:18:21.920496 3691 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 01:18:21.920627 kubelet[3691]: I0515 01:18:21.920616 3691 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 01:18:21.920667 kubelet[3691]: I0515 01:18:21.920652 3691 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 01:18:21.920690 kubelet[3691]: I0515 01:18:21.920676 3691 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 01:18:21.920712 kubelet[3691]: E0515 01:18:21.920697 3691 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" May 15 01:18:21.920752 kubelet[3691]: I0515 01:18:21.920734 3691 reconciler.go:26] "Reconciler: start to sync state" May 15 01:18:21.920893 kubelet[3691]: E0515 01:18:21.920863 3691 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c9ea1a9895?timeout=10s\": dial tcp 147.28.151.170:6443: connect: connection refused" interval="200ms" May 15 01:18:21.921052 kubelet[3691]: W0515 01:18:21.921017 3691 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://147.28.151.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.151.170:6443: connect: connection refused May 15 01:18:21.921075 kubelet[3691]: E0515 01:18:21.921064 3691 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.151.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.151.170:6443: connect: connection refused" logger="UnhandledError" May 15 01:18:21.921110 kubelet[3691]: E0515 01:18:21.921098 3691 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 01:18:21.921131 kubelet[3691]: I0515 01:18:21.921103 3691 factory.go:221] Registration of the systemd container factory successfully May 15 01:18:21.921203 kubelet[3691]: I0515 01:18:21.921189 3691 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 01:18:21.921373 kubelet[3691]: I0515 01:18:21.921360 3691 server.go:490] "Adding debug handlers to kubelet server" May 15 01:18:21.921844 kubelet[3691]: I0515 01:18:21.921830 3691 factory.go:221] Registration of the containerd container factory successfully May 15 01:18:21.922407 kubelet[3691]: E0515 01:18:21.922191 3691 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.151.170:6443/api/v1/namespaces/default/events\": dial tcp 147.28.151.170:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-c9ea1a9895.183f8e898febf762 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-c9ea1a9895,UID:ci-4230.1.1-n-c9ea1a9895,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-c9ea1a9895,},FirstTimestamp:2025-05-15 01:18:21.919311714 +0000 UTC m=+1.030380848,LastTimestamp:2025-05-15 01:18:21.919311714 +0000 UTC m=+1.030380848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-c9ea1a9895,}" May 15 01:18:21.934868 kubelet[3691]: I0515 01:18:21.934815 3691 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 01:18:21.935852 kubelet[3691]: I0515 01:18:21.935839 3691 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 01:18:21.935880 kubelet[3691]: I0515 01:18:21.935862 3691 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 01:18:21.935909 kubelet[3691]: I0515 01:18:21.935878 3691 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 01:18:21.935909 kubelet[3691]: I0515 01:18:21.935886 3691 kubelet.go:2388] "Starting kubelet main sync loop" May 15 01:18:21.935950 kubelet[3691]: E0515 01:18:21.935922 3691 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 01:18:21.936335 kubelet[3691]: W0515 01:18:21.936304 3691 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.151.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.151.170:6443: connect: connection refused May 15 01:18:21.936360 kubelet[3691]: E0515 01:18:21.936347 3691 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.151.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.151.170:6443: connect: connection refused" logger="UnhandledError" May 15 01:18:21.936728 kubelet[3691]: I0515 01:18:21.936715 3691 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 01:18:21.936748 kubelet[3691]: I0515 01:18:21.936729 3691 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 01:18:21.936748 kubelet[3691]: I0515 01:18:21.936745 3691 state_mem.go:36] "Initialized new in-memory state store" May 15 01:18:21.937485 kubelet[3691]: I0515 01:18:21.937470 3691 policy_none.go:49] "None policy: Start" May 15 01:18:21.937514 kubelet[3691]: I0515 01:18:21.937491 3691 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 01:18:21.937514 kubelet[3691]: I0515 01:18:21.937502 3691 state_mem.go:35] "Initializing new in-memory state store" May 15 01:18:21.941629 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 01:18:21.955896 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 01:18:21.958368 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 01:18:21.972748 kubelet[3691]: I0515 01:18:21.972730 3691 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 01:18:21.972927 kubelet[3691]: I0515 01:18:21.972917 3691 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 01:18:21.972966 kubelet[3691]: I0515 01:18:21.972932 3691 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 01:18:21.973094 kubelet[3691]: I0515 01:18:21.973081 3691 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 01:18:21.973572 kubelet[3691]: E0515 01:18:21.973555 3691 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 01:18:21.973607 kubelet[3691]: E0515 01:18:21.973600 3691 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-c9ea1a9895\" not found" May 15 01:18:22.043018 systemd[1]: Created slice kubepods-burstable-podae3f2eb37e34160f9bdfad40cd7080cd.slice - libcontainer container kubepods-burstable-podae3f2eb37e34160f9bdfad40cd7080cd.slice. 
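Editor's note: the kubepods*.slice units created here are the systemd-driver cgroup layout the kubelet announced earlier (CgroupDriver "systemd", CgroupRoot "/"): a top-level kubepods.slice, one child slice per QoS class, and one slice per pod named after its UID, e.g. kubepods-burstable-podae3f2eb37e34160f9bdfad40cd7080cd.slice above. A naming sketch that reproduces the pattern; the dash-to-underscore escaping applies to ordinary pod UIDs (the static-pod hash above happens to contain no dashes), and the handling of the Guaranteed class is my assumption, not shown in this log.

```python
# Reproduce the pod slice names visible in the log, assuming the usual
# systemd cgroup-driver convention: kubepods-<qos>-pod<uid>.slice, with
# dashes in the pod UID escaped to underscores.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    uid = pod_uid.replace("-", "_")
    qos = qos_class.lower()
    if qos == "guaranteed":  # assumption: guaranteed pods sit directly under kubepods.slice
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos}-pod{uid}.slice"

print(pod_slice_name("burstable", "ae3f2eb37e34160f9bdfad40cd7080cd"))
# kubepods-burstable-podae3f2eb37e34160f9bdfad40cd7080cd.slice, as in the log
```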
May 15 01:18:22.066050 kubelet[3691]: E0515 01:18:22.066025 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.067791 systemd[1]: Created slice kubepods-burstable-podc6d17778ef77385f0adab1efbed7c7f4.slice - libcontainer container kubepods-burstable-podc6d17778ef77385f0adab1efbed7c7f4.slice. May 15 01:18:22.068957 kubelet[3691]: E0515 01:18:22.068941 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.075327 kubelet[3691]: I0515 01:18:22.075313 3691 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.075686 kubelet[3691]: E0515 01:18:22.075664 3691 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.151.170:6443/api/v1/nodes\": dial tcp 147.28.151.170:6443: connect: connection refused" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.085136 systemd[1]: Created slice kubepods-burstable-pod94ff37daf81c7978b4ca672d75aac237.slice - libcontainer container kubepods-burstable-pod94ff37daf81c7978b4ca672d75aac237.slice. May 15 01:18:22.086376 kubelet[3691]: E0515 01:18:22.086361 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.121659 kubelet[3691]: E0515 01:18:22.121598 3691 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c9ea1a9895?timeout=10s\": dial tcp 147.28.151.170:6443: connect: connection refused" interval="400ms" May 15 01:18:22.221826 kubelet[3691]: I0515 01:18:22.221797 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae3f2eb37e34160f9bdfad40cd7080cd-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" (UID: \"ae3f2eb37e34160f9bdfad40cd7080cd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.221890 kubelet[3691]: I0515 01:18:22.221839 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.221890 kubelet[3691]: I0515 01:18:22.221865 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.221890 kubelet[3691]: I0515 01:18:22.221881 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94ff37daf81c7978b4ca672d75aac237-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-c9ea1a9895\" (UID: 
\"94ff37daf81c7978b4ca672d75aac237\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.221955 kubelet[3691]: I0515 01:18:22.221895 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae3f2eb37e34160f9bdfad40cd7080cd-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" (UID: \"ae3f2eb37e34160f9bdfad40cd7080cd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.221955 kubelet[3691]: I0515 01:18:22.221911 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae3f2eb37e34160f9bdfad40cd7080cd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" (UID: \"ae3f2eb37e34160f9bdfad40cd7080cd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.221955 kubelet[3691]: I0515 01:18:22.221936 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.222018 kubelet[3691]: I0515 01:18:22.221962 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.222018 kubelet[3691]: I0515 01:18:22.221978 3691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.277833 kubelet[3691]: I0515 01:18:22.277816 3691 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.278134 kubelet[3691]: E0515 01:18:22.278109 3691 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.151.170:6443/api/v1/nodes\": dial tcp 147.28.151.170:6443: connect: connection refused" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.367380 containerd[2704]: time="2025-05-15T01:18:22.367338084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-c9ea1a9895,Uid:ae3f2eb37e34160f9bdfad40cd7080cd,Namespace:kube-system,Attempt:0,}" May 15 01:18:22.369898 containerd[2704]: time="2025-05-15T01:18:22.369847879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-c9ea1a9895,Uid:c6d17778ef77385f0adab1efbed7c7f4,Namespace:kube-system,Attempt:0,}" May 15 01:18:22.387551 containerd[2704]: time="2025-05-15T01:18:22.387488712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-c9ea1a9895,Uid:94ff37daf81c7978b4ca672d75aac237,Namespace:kube-system,Attempt:0,}" May 15 01:18:22.522213 kubelet[3691]: E0515 01:18:22.522188 3691 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://147.28.151.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c9ea1a9895?timeout=10s\": dial tcp 147.28.151.170:6443: connect: connection refused" interval="800ms" May 15 01:18:22.647331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231142194.mount: Deactivated successfully. May 15 01:18:22.648052 containerd[2704]: time="2025-05-15T01:18:22.648020538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:18:22.648720 containerd[2704]: time="2025-05-15T01:18:22.648684589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 15 01:18:22.648929 containerd[2704]: time="2025-05-15T01:18:22.648904300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:18:22.649376 containerd[2704]: time="2025-05-15T01:18:22.649351997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 01:18:22.649725 containerd[2704]: time="2025-05-15T01:18:22.649702129Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:18:22.649756 containerd[2704]: time="2025-05-15T01:18:22.649712521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 01:18:22.653067 containerd[2704]: time="2025-05-15T01:18:22.653036292Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:18:22.654497 containerd[2704]: time="2025-05-15T01:18:22.654470632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 287.03966ms" May 15 01:18:22.655118 containerd[2704]: time="2025-05-15T01:18:22.655097551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 285.152867ms" May 15 01:18:22.655561 containerd[2704]: time="2025-05-15T01:18:22.655535656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 01:18:22.657487 containerd[2704]: time="2025-05-15T01:18:22.657461739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 269.901082ms" May 15 01:18:22.680928 kubelet[3691]: I0515 01:18:22.680897 3691 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.681210 kubelet[3691]: E0515 01:18:22.681180 3691 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.151.170:6443/api/v1/nodes\": dial tcp 147.28.151.170:6443: connect: connection refused" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.754264 containerd[2704]: time="2025-05-15T01:18:22.754193405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:22.754264 containerd[2704]: time="2025-05-15T01:18:22.754251641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:22.754317 containerd[2704]: time="2025-05-15T01:18:22.754262432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:22.754341 containerd[2704]: time="2025-05-15T01:18:22.754329461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:22.754757 containerd[2704]: time="2025-05-15T01:18:22.754706572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:22.754777 containerd[2704]: time="2025-05-15T01:18:22.754759531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:22.754796 containerd[2704]: time="2025-05-15T01:18:22.754771202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:22.754860 containerd[2704]: time="2025-05-15T01:18:22.754840269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:22.755765 containerd[2704]: time="2025-05-15T01:18:22.755717437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:22.755796 containerd[2704]: time="2025-05-15T01:18:22.755762682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:22.755796 containerd[2704]: time="2025-05-15T01:18:22.755774113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:22.755860 containerd[2704]: time="2025-05-15T01:18:22.755840942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:22.756015 kubelet[3691]: W0515 01:18:22.755990 3691 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.151.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.151.170:6443: connect: connection refused May 15 01:18:22.756051 kubelet[3691]: E0515 01:18:22.756033 3691 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.151.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.151.170:6443: connect: connection refused" logger="UnhandledError" May 15 01:18:22.782049 systemd[1]: Started cri-containerd-737c228e1c9562b5a1009dfc2ece89a59754b1300b5ec2374eca4dffcfa96eee.scope - libcontainer container 737c228e1c9562b5a1009dfc2ece89a59754b1300b5ec2374eca4dffcfa96eee. May 15 01:18:22.783408 systemd[1]: Started cri-containerd-939a7985c5a2bd7eaec53e1b8fd9eefa66c3b8d7ba3fa463e7714086d5f8f664.scope - libcontainer container 939a7985c5a2bd7eaec53e1b8fd9eefa66c3b8d7ba3fa463e7714086d5f8f664. May 15 01:18:22.784691 systemd[1]: Started cri-containerd-d530f2468d696a609508a18f1977c8f9c8d05a0b5773bfbe9606d230bae4a710.scope - libcontainer container d530f2468d696a609508a18f1977c8f9c8d05a0b5773bfbe9606d230bae4a710. May 15 01:18:22.805300 containerd[2704]: time="2025-05-15T01:18:22.805265963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-c9ea1a9895,Uid:c6d17778ef77385f0adab1efbed7c7f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"737c228e1c9562b5a1009dfc2ece89a59754b1300b5ec2374eca4dffcfa96eee\"" May 15 01:18:22.806001 containerd[2704]: time="2025-05-15T01:18:22.805983453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-c9ea1a9895,Uid:ae3f2eb37e34160f9bdfad40cd7080cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"939a7985c5a2bd7eaec53e1b8fd9eefa66c3b8d7ba3fa463e7714086d5f8f664\"" May 15 01:18:22.807725 containerd[2704]: time="2025-05-15T01:18:22.807638064Z" level=info msg="CreateContainer within sandbox \"939a7985c5a2bd7eaec53e1b8fd9eefa66c3b8d7ba3fa463e7714086d5f8f664\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 01:18:22.807725 containerd[2704]: time="2025-05-15T01:18:22.807687866Z" level=info msg="CreateContainer within sandbox \"737c228e1c9562b5a1009dfc2ece89a59754b1300b5ec2374eca4dffcfa96eee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 01:18:22.808250 containerd[2704]: time="2025-05-15T01:18:22.808227093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-c9ea1a9895,Uid:94ff37daf81c7978b4ca672d75aac237,Namespace:kube-system,Attempt:0,} returns sandbox id \"d530f2468d696a609508a18f1977c8f9c8d05a0b5773bfbe9606d230bae4a710\"" May 15 01:18:22.809831 containerd[2704]: time="2025-05-15T01:18:22.809805722Z" level=info msg="CreateContainer within sandbox \"d530f2468d696a609508a18f1977c8f9c8d05a0b5773bfbe9606d230bae4a710\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 01:18:22.825455 containerd[2704]: time="2025-05-15T01:18:22.825428263Z" level=info msg="CreateContainer within sandbox \"939a7985c5a2bd7eaec53e1b8fd9eefa66c3b8d7ba3fa463e7714086d5f8f664\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"b48f9e7f833410869a24dcdcc4d0f0c58f7acde37204a48d94c992d71a4b0529\"" May 15 01:18:22.825624 containerd[2704]: time="2025-05-15T01:18:22.825597893Z" level=info msg="CreateContainer within sandbox \"737c228e1c9562b5a1009dfc2ece89a59754b1300b5ec2374eca4dffcfa96eee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2a0b28476eeae73d5dee0d914527055e7f2073cb5344a0ea6207e34be8278ce0\"" May 15 01:18:22.825922 containerd[2704]: time="2025-05-15T01:18:22.825899741Z" level=info msg="StartContainer for \"b48f9e7f833410869a24dcdcc4d0f0c58f7acde37204a48d94c992d71a4b0529\"" May 15 01:18:22.825976 containerd[2704]: time="2025-05-15T01:18:22.825949064Z" level=info msg="CreateContainer within sandbox \"d530f2468d696a609508a18f1977c8f9c8d05a0b5773bfbe9606d230bae4a710\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c6c697d75a0d21915bfff80bd0b3a562c420921a419bdd254092920ff15c4b37\"" May 15 01:18:22.826031 containerd[2704]: time="2025-05-15T01:18:22.825952101Z" level=info msg="StartContainer for \"2a0b28476eeae73d5dee0d914527055e7f2073cb5344a0ea6207e34be8278ce0\"" May 15 01:18:22.826843 containerd[2704]: time="2025-05-15T01:18:22.826814360Z" level=info msg="StartContainer for \"c6c697d75a0d21915bfff80bd0b3a562c420921a419bdd254092920ff15c4b37\"" May 15 01:18:22.860021 systemd[1]: Started cri-containerd-2a0b28476eeae73d5dee0d914527055e7f2073cb5344a0ea6207e34be8278ce0.scope - libcontainer container 2a0b28476eeae73d5dee0d914527055e7f2073cb5344a0ea6207e34be8278ce0. May 15 01:18:22.861162 systemd[1]: Started cri-containerd-b48f9e7f833410869a24dcdcc4d0f0c58f7acde37204a48d94c992d71a4b0529.scope - libcontainer container b48f9e7f833410869a24dcdcc4d0f0c58f7acde37204a48d94c992d71a4b0529. May 15 01:18:22.862207 systemd[1]: Started cri-containerd-c6c697d75a0d21915bfff80bd0b3a562c420921a419bdd254092920ff15c4b37.scope - libcontainer container c6c697d75a0d21915bfff80bd0b3a562c420921a419bdd254092920ff15c4b37. 
May 15 01:18:22.885342 containerd[2704]: time="2025-05-15T01:18:22.885305549Z" level=info msg="StartContainer for \"b48f9e7f833410869a24dcdcc4d0f0c58f7acde37204a48d94c992d71a4b0529\" returns successfully" May 15 01:18:22.886925 containerd[2704]: time="2025-05-15T01:18:22.886893012Z" level=info msg="StartContainer for \"c6c697d75a0d21915bfff80bd0b3a562c420921a419bdd254092920ff15c4b37\" returns successfully" May 15 01:18:22.887667 containerd[2704]: time="2025-05-15T01:18:22.887642957Z" level=info msg="StartContainer for \"2a0b28476eeae73d5dee0d914527055e7f2073cb5344a0ea6207e34be8278ce0\" returns successfully" May 15 01:18:22.941123 kubelet[3691]: E0515 01:18:22.941057 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.941769 kubelet[3691]: E0515 01:18:22.941756 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:22.942559 kubelet[3691]: E0515 01:18:22.942547 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:23.483818 kubelet[3691]: I0515 01:18:23.483797 3691 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:23.947440 kubelet[3691]: E0515 01:18:23.947334 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:23.947440 kubelet[3691]: E0515 01:18:23.947389 3691 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.038022 kubelet[3691]: E0515 01:18:24.037984 3691 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-c9ea1a9895\" not found" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.142168 kubelet[3691]: I0515 01:18:24.142135 3691 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.142168 kubelet[3691]: E0515 01:18:24.142164 3691 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230.1.1-n-c9ea1a9895\": node \"ci-4230.1.1-n-c9ea1a9895\" not found" May 15 01:18:24.221569 kubelet[3691]: I0515 01:18:24.221537 3691 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.226184 kubelet[3691]: E0515 01:18:24.226158 3691 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-n-c9ea1a9895\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.226184 kubelet[3691]: I0515 01:18:24.226181 3691 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.227620 kubelet[3691]: E0515 01:18:24.227593 3691 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.227649 kubelet[3691]: I0515 01:18:24.227624 3691 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.229095 kubelet[3691]: E0515 01:18:24.229074 3691 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.915089 kubelet[3691]: I0515 01:18:24.915066 3691 apiserver.go:52] "Watching apiserver" May 15 01:18:24.921218 kubelet[3691]: I0515 01:18:24.921191 3691 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 01:18:24.943416 kubelet[3691]: I0515 01:18:24.943397 3691 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:24.944837 kubelet[3691]: E0515 01:18:24.944816 3691 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-n-c9ea1a9895\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:25.822453 systemd[1]: Reload requested from client PID 4124 ('systemctl') (unit session-9.scope)... May 15 01:18:25.822464 systemd[1]: Reloading... May 15 01:18:25.891866 zram_generator::config[4173]: No configuration found. May 15 01:18:25.981300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:18:26.083330 systemd[1]: Reloading finished in 260 ms. May 15 01:18:26.106631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:18:26.135666 systemd[1]: kubelet.service: Deactivated successfully. May 15 01:18:26.136934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:18:26.136994 systemd[1]: kubelet.service: Consumed 1.484s CPU time, 152M memory peak. May 15 01:18:26.148168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 01:18:26.256015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 01:18:26.259534 (kubelet)[4234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 01:18:26.289682 kubelet[4234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:18:26.289682 kubelet[4234]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 01:18:26.289682 kubelet[4234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 01:18:26.290070 kubelet[4234]: I0515 01:18:26.289739 4234 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 01:18:26.295905 kubelet[4234]: I0515 01:18:26.295881 4234 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 01:18:26.295905 kubelet[4234]: I0515 01:18:26.295904 4234 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 01:18:26.296134 kubelet[4234]: I0515 01:18:26.296122 4234 server.go:954] "Client rotation is on, will bootstrap in background" May 15 01:18:26.297295 kubelet[4234]: I0515 01:18:26.297280 4234 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 01:18:26.299489 kubelet[4234]: I0515 01:18:26.299468 4234 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 01:18:26.301817 kubelet[4234]: E0515 01:18:26.301791 4234 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 01:18:26.301844 kubelet[4234]: I0515 01:18:26.301820 4234 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 01:18:26.321115 kubelet[4234]: I0515 01:18:26.321075 4234 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 01:18:26.321293 kubelet[4234]: I0515 01:18:26.321267 4234 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 01:18:26.321443 kubelet[4234]: I0515 01:18:26.321289 4234 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-c9ea1a9895","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 01:18:26.321511 kubelet[4234]: I0515 01:18:26.321452 4234 
topology_manager.go:138] "Creating topology manager with none policy" May 15 01:18:26.321511 kubelet[4234]: I0515 01:18:26.321461 4234 container_manager_linux.go:304] "Creating device plugin manager" May 15 01:18:26.321556 kubelet[4234]: I0515 01:18:26.321521 4234 state_mem.go:36] "Initialized new in-memory state store" May 15 01:18:26.321817 kubelet[4234]: I0515 01:18:26.321806 4234 kubelet.go:446] "Attempting to sync node with API server" May 15 01:18:26.321842 kubelet[4234]: I0515 01:18:26.321823 4234 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 01:18:26.321842 kubelet[4234]: I0515 01:18:26.321839 4234 kubelet.go:352] "Adding apiserver pod source" May 15 01:18:26.321891 kubelet[4234]: I0515 01:18:26.321848 4234 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 01:18:26.322478 kubelet[4234]: I0515 01:18:26.322458 4234 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 01:18:26.322929 kubelet[4234]: I0515 01:18:26.322916 4234 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 01:18:26.323327 kubelet[4234]: I0515 01:18:26.323314 4234 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 01:18:26.323354 kubelet[4234]: I0515 01:18:26.323339 4234 server.go:1287] "Started kubelet" May 15 01:18:26.323421 kubelet[4234]: I0515 01:18:26.323386 4234 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 01:18:26.323480 kubelet[4234]: I0515 01:18:26.323423 4234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 01:18:26.323665 kubelet[4234]: I0515 01:18:26.323651 4234 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 01:18:26.325782 kubelet[4234]: I0515 01:18:26.325764 4234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 01:18:26.325782 kubelet[4234]: I0515 01:18:26.325772 4234 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 01:18:26.325852 kubelet[4234]: I0515 01:18:26.325837 4234 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 01:18:26.325882 kubelet[4234]: E0515 01:18:26.325836 4234 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c9ea1a9895\" not found" May 15 01:18:26.325909 kubelet[4234]: I0515 01:18:26.325901 4234 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 01:18:26.326067 kubelet[4234]: I0515 01:18:26.326048 4234 reconciler.go:26] "Reconciler: start to sync state" May 15 01:18:26.326453 kubelet[4234]: E0515 01:18:26.326436 4234 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 01:18:26.326453 kubelet[4234]: I0515 01:18:26.326437 4234 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 01:18:26.327090 kubelet[4234]: I0515 01:18:26.327073 4234 server.go:490] "Adding debug handlers to kubelet server" May 15 01:18:26.327116 kubelet[4234]: I0515 01:18:26.327100 4234 factory.go:221] Registration of the containerd container factory successfully May 15 01:18:26.327116 kubelet[4234]: I0515 01:18:26.327115 4234 factory.go:221] Registration of the systemd container factory successfully May 15 01:18:26.333668 kubelet[4234]: I0515 01:18:26.333596 4234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 01:18:26.334635 kubelet[4234]: I0515 01:18:26.334619 4234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 01:18:26.334663 kubelet[4234]: I0515 01:18:26.334636 4234 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 01:18:26.334663 kubelet[4234]: I0515 01:18:26.334655 4234 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 01:18:26.334663 kubelet[4234]: I0515 01:18:26.334663 4234 kubelet.go:2388] "Starting kubelet main sync loop" May 15 01:18:26.334727 kubelet[4234]: E0515 01:18:26.334705 4234 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 01:18:26.356689 kubelet[4234]: I0515 01:18:26.356669 4234 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 01:18:26.356689 kubelet[4234]: I0515 01:18:26.356686 4234 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 01:18:26.356736 kubelet[4234]: I0515 01:18:26.356707 4234 state_mem.go:36] "Initialized new in-memory state store" May 15 01:18:26.356872 kubelet[4234]: I0515 01:18:26.356852 4234 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 01:18:26.356898 kubelet[4234]: I0515 01:18:26.356870 4234 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 01:18:26.356898 kubelet[4234]: I0515 01:18:26.356890 4234 policy_none.go:49] "None policy: Start" May 15 01:18:26.356898 kubelet[4234]: I0515 01:18:26.356898 4234 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 01:18:26.356957 kubelet[4234]: I0515 01:18:26.356907 4234 state_mem.go:35] "Initializing new in-memory state store" May 15 01:18:26.357009 kubelet[4234]: I0515 01:18:26.356999 4234 state_mem.go:75] "Updated machine memory state" May 15 01:18:26.359938 kubelet[4234]: I0515 01:18:26.359923 4234 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 01:18:26.360085 kubelet[4234]: I0515 01:18:26.360072 4234 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 01:18:26.360109 kubelet[4234]: I0515 01:18:26.360085 4234 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 01:18:26.360265 kubelet[4234]: I0515 01:18:26.360248 4234 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 01:18:26.360758 kubelet[4234]: E0515 01:18:26.360738 4234 eviction_manager.go:267] "eviction manager: failed to check if we have separate container 
filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 01:18:26.435833 kubelet[4234]: I0515 01:18:26.435815 4234 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.435869 kubelet[4234]: I0515 01:18:26.435829 4234 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.435920 kubelet[4234]: I0515 01:18:26.435903 4234 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.444237 kubelet[4234]: W0515 01:18:26.444217 4234 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:18:26.444358 kubelet[4234]: W0515 01:18:26.444341 4234 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:18:26.444411 kubelet[4234]: W0515 01:18:26.444391 4234 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:18:26.462948 kubelet[4234]: I0515 01:18:26.462933 4234 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.466824 kubelet[4234]: I0515 01:18:26.466802 4234 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.466890 kubelet[4234]: I0515 01:18:26.466876 4234 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.627782 kubelet[4234]: I0515 01:18:26.627715 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.627782 kubelet[4234]: I0515 01:18:26.627759 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.627907 kubelet[4234]: I0515 01:18:26.627790 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.627907 kubelet[4234]: I0515 01:18:26.627819 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94ff37daf81c7978b4ca672d75aac237-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-c9ea1a9895\" (UID: \"94ff37daf81c7978b4ca672d75aac237\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.627907 kubelet[4234]: I0515 01:18:26.627842 4234 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae3f2eb37e34160f9bdfad40cd7080cd-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" (UID: \"ae3f2eb37e34160f9bdfad40cd7080cd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.627907 kubelet[4234]: I0515 01:18:26.627863 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae3f2eb37e34160f9bdfad40cd7080cd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" (UID: \"ae3f2eb37e34160f9bdfad40cd7080cd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.628036 kubelet[4234]: I0515 01:18:26.627931 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.628036 kubelet[4234]: I0515 01:18:26.627983 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6d17778ef77385f0adab1efbed7c7f4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-c9ea1a9895\" (UID: \"c6d17778ef77385f0adab1efbed7c7f4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:26.628036 kubelet[4234]: I0515 01:18:26.628016 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae3f2eb37e34160f9bdfad40cd7080cd-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" (UID: \"ae3f2eb37e34160f9bdfad40cd7080cd\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:27.322367 kubelet[4234]: I0515 01:18:27.322334 4234 apiserver.go:52] "Watching apiserver" May 15 01:18:27.326368 kubelet[4234]: I0515 01:18:27.326345 4234 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 01:18:27.339651 kubelet[4234]: I0515 01:18:27.339625 4234 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:27.339847 kubelet[4234]: I0515 01:18:27.339753 4234 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:27.342969 kubelet[4234]: W0515 01:18:27.342942 4234 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:18:27.343029 kubelet[4234]: E0515 01:18:27.342991 4234 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-n-c9ea1a9895\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:27.343097 kubelet[4234]: W0515 01:18:27.343074 4234 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 01:18:27.343152 kubelet[4234]: E0515 01:18:27.343135 4234 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4230.1.1-n-c9ea1a9895\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" May 15 01:18:27.363262 kubelet[4234]: I0515 01:18:27.363207 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-c9ea1a9895" podStartSLOduration=1.36317735 podStartE2EDuration="1.36317735s" podCreationTimestamp="2025-05-15 01:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:18:27.363151164 +0000 UTC m=+1.100674850" watchObservedRunningTime="2025-05-15 01:18:27.36317735 +0000 UTC m=+1.100701035" May 15 01:18:27.375676 kubelet[4234]: I0515 01:18:27.375639 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c9ea1a9895" podStartSLOduration=1.3756258780000001 podStartE2EDuration="1.375625878s" podCreationTimestamp="2025-05-15 01:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:18:27.375615844 +0000 UTC m=+1.113139570" watchObservedRunningTime="2025-05-15 01:18:27.375625878 +0000 UTC m=+1.113149564" May 15 01:18:27.375755 kubelet[4234]: I0515 01:18:27.375705 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c9ea1a9895" podStartSLOduration=1.375700077 podStartE2EDuration="1.375700077s" podCreationTimestamp="2025-05-15 01:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:18:27.369877909 +0000 UTC m=+1.107401595" watchObservedRunningTime="2025-05-15 01:18:27.375700077 +0000 UTC m=+1.113223763" May 15 01:18:30.721235 sudo[2964]: pam_unix(sudo:session): session closed for user root May 15 01:18:30.775817 kubelet[4234]: I0515 01:18:30.775783 4234 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 01:18:30.776094 containerd[2704]: time="2025-05-15T01:18:30.776065903Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 01:18:30.776260 kubelet[4234]: I0515 01:18:30.776233 4234 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 01:18:30.784151 sshd[2963]: Connection closed by 139.178.68.195 port 58140 May 15 01:18:30.784549 sshd-session[2961]: pam_unix(sshd:session): session closed for user core May 15 01:18:30.787965 systemd[1]: sshd@6-147.28.151.170:22-139.178.68.195:58140.service: Deactivated successfully. May 15 01:18:30.790317 systemd[1]: session-9.scope: Deactivated successfully. May 15 01:18:30.790502 systemd[1]: session-9.scope: Consumed 9.173s CPU time, 252.9M memory peak. May 15 01:18:30.791643 systemd-logind[2688]: Session 9 logged out. Waiting for processes to exit. May 15 01:18:30.792239 systemd-logind[2688]: Removed session 9. May 15 01:18:31.421959 systemd[1]: Created slice kubepods-besteffort-pod5881f33d_bbbf_48f9_a2f9_c2a0253d169f.slice - libcontainer container kubepods-besteffort-pod5881f33d_bbbf_48f9_a2f9_c2a0253d169f.slice. 
May 15 01:18:31.459087 kubelet[4234]: I0515 01:18:31.459019 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5881f33d-bbbf-48f9-a2f9-c2a0253d169f-kube-proxy\") pod \"kube-proxy-l99hl\" (UID: \"5881f33d-bbbf-48f9-a2f9-c2a0253d169f\") " pod="kube-system/kube-proxy-l99hl" May 15 01:18:31.459087 kubelet[4234]: I0515 01:18:31.459060 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5881f33d-bbbf-48f9-a2f9-c2a0253d169f-lib-modules\") pod \"kube-proxy-l99hl\" (UID: \"5881f33d-bbbf-48f9-a2f9-c2a0253d169f\") " pod="kube-system/kube-proxy-l99hl" May 15 01:18:31.459087 kubelet[4234]: I0515 01:18:31.459077 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvrz\" (UniqueName: \"kubernetes.io/projected/5881f33d-bbbf-48f9-a2f9-c2a0253d169f-kube-api-access-6lvrz\") pod \"kube-proxy-l99hl\" (UID: \"5881f33d-bbbf-48f9-a2f9-c2a0253d169f\") " pod="kube-system/kube-proxy-l99hl" May 15 01:18:31.459264 kubelet[4234]: I0515 01:18:31.459162 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5881f33d-bbbf-48f9-a2f9-c2a0253d169f-xtables-lock\") pod \"kube-proxy-l99hl\" (UID: \"5881f33d-bbbf-48f9-a2f9-c2a0253d169f\") " pod="kube-system/kube-proxy-l99hl" May 15 01:18:31.733851 containerd[2704]: time="2025-05-15T01:18:31.733816293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l99hl,Uid:5881f33d-bbbf-48f9-a2f9-c2a0253d169f,Namespace:kube-system,Attempt:0,}" May 15 01:18:31.746246 containerd[2704]: time="2025-05-15T01:18:31.746177792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:31.746281 containerd[2704]: time="2025-05-15T01:18:31.746241165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:31.746281 containerd[2704]: time="2025-05-15T01:18:31.746253239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:31.746349 containerd[2704]: time="2025-05-15T01:18:31.746332325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:31.772035 systemd[1]: Started cri-containerd-193a0bbf6de90ec9262e3baa62006cc3a64289d8e63e727cf357cda300cdba99.scope - libcontainer container 193a0bbf6de90ec9262e3baa62006cc3a64289d8e63e727cf357cda300cdba99. 
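
Each reconciler_common.go:251 entry above marks the desired-state-of-world reconciler starting VerifyControllerAttachedVolume for one volume of kube-proxy-l99hl (the kube-proxy ConfigMap, the lib-modules and xtables-lock host paths, and the projected service-account token). The fields are regular enough to extract mechanically; the helper below is my own sketch — the regex is written against the escaped-quote form these journal lines use, not anything kubelet exposes:

import re
from collections import defaultdict

# Matches reconciler_common.go:251 entries as they appear in this journal excerpt,
# where the inner quotes are backslash-escaped.
PATTERN = re.compile(
    r'started for volume \\?"(?P<volume>[^"\\]+)\\?" '
    r'\(UniqueName: \\?"(?P<unique>[^"\\]+)\\?"\).*?'
    r'pod="(?P<pod>[^"]+)"'
)

def volumes_by_pod(lines):
    """Group (volume name, UniqueName) pairs by the pod they are attached for."""
    out = defaultdict(list)
    for line in lines:
        m = PATTERN.search(line)
        if m:
            out[m.group("pod")].append((m.group("volume"), m.group("unique")))
    return dict(out)

sample = (
    'reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume '
    'started for volume \\"xtables-lock\\" (UniqueName: \\"kubernetes.io/host-path/'
    '5881f33d-bbbf-48f9-a2f9-c2a0253d169f-xtables-lock\\") pod \\"kube-proxy-l99hl\\" '
    '(UID: \\"5881f33d-bbbf-48f9-a2f9-c2a0253d169f\\") " pod="kube-system/kube-proxy-l99hl"'
)
print(volumes_by_pod([sample]))
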
May 15 01:18:31.787934 containerd[2704]: time="2025-05-15T01:18:31.787904299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l99hl,Uid:5881f33d-bbbf-48f9-a2f9-c2a0253d169f,Namespace:kube-system,Attempt:0,} returns sandbox id \"193a0bbf6de90ec9262e3baa62006cc3a64289d8e63e727cf357cda300cdba99\"" May 15 01:18:31.789941 containerd[2704]: time="2025-05-15T01:18:31.789918955Z" level=info msg="CreateContainer within sandbox \"193a0bbf6de90ec9262e3baa62006cc3a64289d8e63e727cf357cda300cdba99\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 01:18:31.797860 containerd[2704]: time="2025-05-15T01:18:31.797803654Z" level=info msg="CreateContainer within sandbox \"193a0bbf6de90ec9262e3baa62006cc3a64289d8e63e727cf357cda300cdba99\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db33202d75118fa2af21627b3157eff056e1a224b01c30ed60b5699b078c9ff6\"" May 15 01:18:31.798240 containerd[2704]: time="2025-05-15T01:18:31.798217196Z" level=info msg="StartContainer for \"db33202d75118fa2af21627b3157eff056e1a224b01c30ed60b5699b078c9ff6\"" May 15 01:18:31.832035 systemd[1]: Started cri-containerd-db33202d75118fa2af21627b3157eff056e1a224b01c30ed60b5699b078c9ff6.scope - libcontainer container db33202d75118fa2af21627b3157eff056e1a224b01c30ed60b5699b078c9ff6. May 15 01:18:31.851821 containerd[2704]: time="2025-05-15T01:18:31.851790743Z" level=info msg="StartContainer for \"db33202d75118fa2af21627b3157eff056e1a224b01c30ed60b5699b078c9ff6\" returns successfully" May 15 01:18:31.879002 systemd[1]: Created slice kubepods-besteffort-pod210a621d_ab9c_47c5_b877_4b1665c3db48.slice - libcontainer container kubepods-besteffort-pod210a621d_ab9c_47c5_b877_4b1665c3db48.slice. May 15 01:18:31.962442 kubelet[4234]: I0515 01:18:31.962418 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvgcw\" (UniqueName: \"kubernetes.io/projected/210a621d-ab9c-47c5-b877-4b1665c3db48-kube-api-access-tvgcw\") pod \"tigera-operator-789496d6f5-rmxtm\" (UID: \"210a621d-ab9c-47c5-b877-4b1665c3db48\") " pod="tigera-operator/tigera-operator-789496d6f5-rmxtm" May 15 01:18:31.962702 kubelet[4234]: I0515 01:18:31.962454 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/210a621d-ab9c-47c5-b877-4b1665c3db48-var-lib-calico\") pod \"tigera-operator-789496d6f5-rmxtm\" (UID: \"210a621d-ab9c-47c5-b877-4b1665c3db48\") " pod="tigera-operator/tigera-operator-789496d6f5-rmxtm" May 15 01:18:32.181304 containerd[2704]: time="2025-05-15T01:18:32.181217075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-rmxtm,Uid:210a621d-ab9c-47c5-b877-4b1665c3db48,Namespace:tigera-operator,Attempt:0,}" May 15 01:18:32.194144 containerd[2704]: time="2025-05-15T01:18:32.194079704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:32.194144 containerd[2704]: time="2025-05-15T01:18:32.194138440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:32.194254 containerd[2704]: time="2025-05-15T01:18:32.194150236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:32.194254 containerd[2704]: time="2025-05-15T01:18:32.194221287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:32.218976 systemd[1]: Started cri-containerd-1a4891e0edbea9a4d5d7020ac14bcbc610f0fbe82f2ca63713fa885a04c9f9f0.scope - libcontainer container 1a4891e0edbea9a4d5d7020ac14bcbc610f0fbe82f2ca63713fa885a04c9f9f0. May 15 01:18:32.241398 containerd[2704]: time="2025-05-15T01:18:32.241369613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-rmxtm,Uid:210a621d-ab9c-47c5-b877-4b1665c3db48,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1a4891e0edbea9a4d5d7020ac14bcbc610f0fbe82f2ca63713fa885a04c9f9f0\"" May 15 01:18:32.242540 containerd[2704]: time="2025-05-15T01:18:32.242519791Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 01:18:32.352770 kubelet[4234]: I0515 01:18:32.352726 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l99hl" podStartSLOduration=1.352710573 podStartE2EDuration="1.352710573s" podCreationTimestamp="2025-05-15 01:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:18:32.352674188 +0000 UTC m=+6.090197874" watchObservedRunningTime="2025-05-15 01:18:32.352710573 +0000 UTC m=+6.090234219" May 15 01:18:33.595888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142713274.mount: Deactivated successfully. May 15 01:18:34.028594 containerd[2704]: time="2025-05-15T01:18:34.028553168Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:34.028977 containerd[2704]: time="2025-05-15T01:18:34.028598671Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 15 01:18:34.029418 containerd[2704]: time="2025-05-15T01:18:34.029399429Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:34.031314 containerd[2704]: time="2025-05-15T01:18:34.031296398Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:34.032099 containerd[2704]: time="2025-05-15T01:18:34.032075723Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.789530062s" May 15 01:18:34.032121 containerd[2704]: time="2025-05-15T01:18:34.032105392Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 01:18:34.033741 containerd[2704]: time="2025-05-15T01:18:34.033718183Z" level=info msg="CreateContainer within sandbox \"1a4891e0edbea9a4d5d7020ac14bcbc610f0fbe82f2ca63713fa885a04c9f9f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 01:18:34.038563 containerd[2704]: 
time="2025-05-15T01:18:34.038540519Z" level=info msg="CreateContainer within sandbox \"1a4891e0edbea9a4d5d7020ac14bcbc610f0fbe82f2ca63713fa885a04c9f9f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"902266833ee35d904eda55a2e159fe848eb196f6c878fce299481f35d246f7f9\"" May 15 01:18:34.038918 containerd[2704]: time="2025-05-15T01:18:34.038895593Z" level=info msg="StartContainer for \"902266833ee35d904eda55a2e159fe848eb196f6c878fce299481f35d246f7f9\"" May 15 01:18:34.068040 systemd[1]: Started cri-containerd-902266833ee35d904eda55a2e159fe848eb196f6c878fce299481f35d246f7f9.scope - libcontainer container 902266833ee35d904eda55a2e159fe848eb196f6c878fce299481f35d246f7f9. May 15 01:18:34.084859 containerd[2704]: time="2025-05-15T01:18:34.084830684Z" level=info msg="StartContainer for \"902266833ee35d904eda55a2e159fe848eb196f6c878fce299481f35d246f7f9\" returns successfully" May 15 01:18:34.357064 kubelet[4234]: I0515 01:18:34.356981 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-rmxtm" podStartSLOduration=1.5663842639999999 podStartE2EDuration="3.356965976s" podCreationTimestamp="2025-05-15 01:18:31 +0000 UTC" firstStartedPulling="2025-05-15 01:18:32.242155817 +0000 UTC m=+5.979679503" lastFinishedPulling="2025-05-15 01:18:34.032737569 +0000 UTC m=+7.770261215" observedRunningTime="2025-05-15 01:18:34.356885925 +0000 UTC m=+8.094409611" watchObservedRunningTime="2025-05-15 01:18:34.356965976 +0000 UTC m=+8.094489662" May 15 01:18:34.742960 update_engine[2696]: I20250515 01:18:34.742897 2696 update_attempter.cc:509] Updating boot flags... May 15 01:18:34.772868 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (4847) May 15 01:18:34.801866 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (4846) May 15 01:18:37.852211 systemd[1]: Created slice kubepods-besteffort-pod5d17c1df_9021_40c0_9193_0a2373adc971.slice - libcontainer container kubepods-besteffort-pod5d17c1df_9021_40c0_9193_0a2373adc971.slice. May 15 01:18:37.856567 systemd[1]: Created slice kubepods-besteffort-pod1e690c24_364f_4772_832e_26caa1e545f0.slice - libcontainer container kubepods-besteffort-pod1e690c24_364f_4772_832e_26caa1e545f0.slice. 
May 15 01:18:37.893957 kubelet[4234]: I0515 01:18:37.893922 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4wb\" (UniqueName: \"kubernetes.io/projected/1e690c24-364f-4772-832e-26caa1e545f0-kube-api-access-7z4wb\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894588 kubelet[4234]: I0515 01:18:37.894306 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-cni-net-dir\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894588 kubelet[4234]: I0515 01:18:37.894339 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5d17c1df-9021-40c0-9193-0a2373adc971-typha-certs\") pod \"calico-typha-6cb9cfc57-q2gn8\" (UID: \"5d17c1df-9021-40c0-9193-0a2373adc971\") " pod="calico-system/calico-typha-6cb9cfc57-q2gn8" May 15 01:18:37.894588 kubelet[4234]: I0515 01:18:37.894357 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-flexvol-driver-host\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894588 kubelet[4234]: I0515 01:18:37.894376 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d17c1df-9021-40c0-9193-0a2373adc971-tigera-ca-bundle\") pod \"calico-typha-6cb9cfc57-q2gn8\" (UID: \"5d17c1df-9021-40c0-9193-0a2373adc971\") " pod="calico-system/calico-typha-6cb9cfc57-q2gn8" May 15 01:18:37.894588 kubelet[4234]: I0515 01:18:37.894392 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-lib-modules\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894799 kubelet[4234]: I0515 01:18:37.894409 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-policysync\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894799 kubelet[4234]: I0515 01:18:37.894425 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e690c24-364f-4772-832e-26caa1e545f0-tigera-ca-bundle\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894799 kubelet[4234]: I0515 01:18:37.894441 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e690c24-364f-4772-832e-26caa1e545f0-node-certs\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894799 
kubelet[4234]: I0515 01:18:37.894460 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-var-run-calico\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894799 kubelet[4234]: I0515 01:18:37.894475 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-var-lib-calico\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894934 kubelet[4234]: I0515 01:18:37.894491 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-cni-log-dir\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894934 kubelet[4234]: I0515 01:18:37.894511 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-xtables-lock\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894934 kubelet[4234]: I0515 01:18:37.894533 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e690c24-364f-4772-832e-26caa1e545f0-cni-bin-dir\") pod \"calico-node-qkcpw\" (UID: \"1e690c24-364f-4772-832e-26caa1e545f0\") " pod="calico-system/calico-node-qkcpw" May 15 01:18:37.894934 kubelet[4234]: I0515 01:18:37.894582 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvlrx\" (UniqueName: \"kubernetes.io/projected/5d17c1df-9021-40c0-9193-0a2373adc971-kube-api-access-fvlrx\") pod \"calico-typha-6cb9cfc57-q2gn8\" (UID: \"5d17c1df-9021-40c0-9193-0a2373adc971\") " pod="calico-system/calico-typha-6cb9cfc57-q2gn8" May 15 01:18:37.908325 kubelet[4234]: E0515 01:18:37.908297 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9rx" podUID="afd4a0f0-7568-4656-98b5-f7eb614f688a" May 15 01:18:37.995749 kubelet[4234]: I0515 01:18:37.995635 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/afd4a0f0-7568-4656-98b5-f7eb614f688a-socket-dir\") pod \"csi-node-driver-jm9rx\" (UID: \"afd4a0f0-7568-4656-98b5-f7eb614f688a\") " pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:37.995749 kubelet[4234]: I0515 01:18:37.995685 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/afd4a0f0-7568-4656-98b5-f7eb614f688a-registration-dir\") pod \"csi-node-driver-jm9rx\" (UID: \"afd4a0f0-7568-4656-98b5-f7eb614f688a\") " pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:37.995749 kubelet[4234]: I0515 01:18:37.995754 4234 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afd4a0f0-7568-4656-98b5-f7eb614f688a-kubelet-dir\") pod \"csi-node-driver-jm9rx\" (UID: \"afd4a0f0-7568-4656-98b5-f7eb614f688a\") " pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:37.995935 kubelet[4234]: I0515 01:18:37.995825 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/afd4a0f0-7568-4656-98b5-f7eb614f688a-varrun\") pod \"csi-node-driver-jm9rx\" (UID: \"afd4a0f0-7568-4656-98b5-f7eb614f688a\") " pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:37.996964 kubelet[4234]: I0515 01:18:37.996638 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkl2\" (UniqueName: \"kubernetes.io/projected/afd4a0f0-7568-4656-98b5-f7eb614f688a-kube-api-access-jwkl2\") pod \"csi-node-driver-jm9rx\" (UID: \"afd4a0f0-7568-4656-98b5-f7eb614f688a\") " pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:37.997931 kubelet[4234]: E0515 01:18:37.997912 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:37.997980 kubelet[4234]: W0515 01:18:37.997932 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:37.998499 kubelet[4234]: E0515 01:18:37.997997 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:37.999285 kubelet[4234]: E0515 01:18:37.999275 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:37.999312 kubelet[4234]: W0515 01:18:37.999286 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:37.999312 kubelet[4234]: E0515 01:18:37.999300 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:37.999534 kubelet[4234]: E0515 01:18:37.999526 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:37.999557 kubelet[4234]: W0515 01:18:37.999534 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:37.999557 kubelet[4234]: E0515 01:18:37.999541 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:18:38.005025 kubelet[4234]: E0515 01:18:38.005012 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.005025 kubelet[4234]: W0515 01:18:38.005024 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.005087 kubelet[4234]: E0515 01:18:38.005035 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.005376 kubelet[4234]: E0515 01:18:38.005362 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.005403 kubelet[4234]: W0515 01:18:38.005376 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.005403 kubelet[4234]: E0515 01:18:38.005389 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.097699 kubelet[4234]: E0515 01:18:38.097669 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.097699 kubelet[4234]: W0515 01:18:38.097685 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.097699 kubelet[4234]: E0515 01:18:38.097700 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.098015 kubelet[4234]: E0515 01:18:38.098001 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.098015 kubelet[4234]: W0515 01:18:38.098011 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.098072 kubelet[4234]: E0515 01:18:38.098023 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.098291 kubelet[4234]: E0515 01:18:38.098271 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.098313 kubelet[4234]: W0515 01:18:38.098289 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.098313 kubelet[4234]: E0515 01:18:38.098308 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:18:38.098535 kubelet[4234]: E0515 01:18:38.098524 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.098535 kubelet[4234]: W0515 01:18:38.098532 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.098579 kubelet[4234]: E0515 01:18:38.098544 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.098738 kubelet[4234]: E0515 01:18:38.098728 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.098738 kubelet[4234]: W0515 01:18:38.098736 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.098782 kubelet[4234]: E0515 01:18:38.098745 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.098983 kubelet[4234]: E0515 01:18:38.098972 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.098983 kubelet[4234]: W0515 01:18:38.098980 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.099038 kubelet[4234]: E0515 01:18:38.098992 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.099158 kubelet[4234]: E0515 01:18:38.099147 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.099158 kubelet[4234]: W0515 01:18:38.099155 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.099197 kubelet[4234]: E0515 01:18:38.099165 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.099379 kubelet[4234]: E0515 01:18:38.099371 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.099401 kubelet[4234]: W0515 01:18:38.099379 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.099429 kubelet[4234]: E0515 01:18:38.099397 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:18:38.099560 kubelet[4234]: E0515 01:18:38.099553 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.099585 kubelet[4234]: W0515 01:18:38.099560 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.099607 kubelet[4234]: E0515 01:18:38.099584 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.099710 kubelet[4234]: E0515 01:18:38.099702 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.099732 kubelet[4234]: W0515 01:18:38.099709 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.099732 kubelet[4234]: E0515 01:18:38.099720 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.099994 kubelet[4234]: E0515 01:18:38.099981 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.100020 kubelet[4234]: W0515 01:18:38.099995 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.100020 kubelet[4234]: E0515 01:18:38.100011 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.100285 kubelet[4234]: E0515 01:18:38.100271 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.100285 kubelet[4234]: W0515 01:18:38.100282 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.100329 kubelet[4234]: E0515 01:18:38.100296 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.100527 kubelet[4234]: E0515 01:18:38.100516 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.100527 kubelet[4234]: W0515 01:18:38.100525 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.100567 kubelet[4234]: E0515 01:18:38.100538 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:18:38.100683 kubelet[4234]: E0515 01:18:38.100673 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.100683 kubelet[4234]: W0515 01:18:38.100680 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.100726 kubelet[4234]: E0515 01:18:38.100690 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.100946 kubelet[4234]: E0515 01:18:38.100938 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.100970 kubelet[4234]: W0515 01:18:38.100945 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.100970 kubelet[4234]: E0515 01:18:38.100961 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.101091 kubelet[4234]: E0515 01:18:38.101084 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.101091 kubelet[4234]: W0515 01:18:38.101090 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.101176 kubelet[4234]: E0515 01:18:38.101105 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.101292 kubelet[4234]: E0515 01:18:38.101284 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.101292 kubelet[4234]: W0515 01:18:38.101291 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.101332 kubelet[4234]: E0515 01:18:38.101304 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.101487 kubelet[4234]: E0515 01:18:38.101480 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.101487 kubelet[4234]: W0515 01:18:38.101486 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.101532 kubelet[4234]: E0515 01:18:38.101496 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:18:38.101696 kubelet[4234]: E0515 01:18:38.101689 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.101716 kubelet[4234]: W0515 01:18:38.101695 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.101716 kubelet[4234]: E0515 01:18:38.101705 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.101956 kubelet[4234]: E0515 01:18:38.101948 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.101977 kubelet[4234]: W0515 01:18:38.101955 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.101977 kubelet[4234]: E0515 01:18:38.101965 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.102134 kubelet[4234]: E0515 01:18:38.102127 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.102155 kubelet[4234]: W0515 01:18:38.102134 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.102155 kubelet[4234]: E0515 01:18:38.102144 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.102335 kubelet[4234]: E0515 01:18:38.102287 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.102335 kubelet[4234]: W0515 01:18:38.102294 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.102335 kubelet[4234]: E0515 01:18:38.102303 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.102466 kubelet[4234]: E0515 01:18:38.102458 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.102466 kubelet[4234]: W0515 01:18:38.102465 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.102512 kubelet[4234]: E0515 01:18:38.102482 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:18:38.102710 kubelet[4234]: E0515 01:18:38.102640 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.102710 kubelet[4234]: W0515 01:18:38.102647 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.102710 kubelet[4234]: E0515 01:18:38.102654 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.102984 kubelet[4234]: E0515 01:18:38.102970 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.102984 kubelet[4234]: W0515 01:18:38.102981 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.103045 kubelet[4234]: E0515 01:18:38.102990 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.110965 kubelet[4234]: E0515 01:18:38.110951 4234 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:18:38.110965 kubelet[4234]: W0515 01:18:38.110961 4234 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:18:38.111071 kubelet[4234]: E0515 01:18:38.110973 4234 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:18:38.155097 containerd[2704]: time="2025-05-15T01:18:38.155058852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cb9cfc57-q2gn8,Uid:5d17c1df-9021-40c0-9193-0a2373adc971,Namespace:calico-system,Attempt:0,}" May 15 01:18:38.158628 containerd[2704]: time="2025-05-15T01:18:38.158593487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkcpw,Uid:1e690c24-364f-4772-832e-26caa1e545f0,Namespace:calico-system,Attempt:0,}" May 15 01:18:38.167569 containerd[2704]: time="2025-05-15T01:18:38.167512533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:38.167612 containerd[2704]: time="2025-05-15T01:18:38.167565719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:38.167612 containerd[2704]: time="2025-05-15T01:18:38.167577276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:38.167670 containerd[2704]: time="2025-05-15T01:18:38.167655294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:38.170057 containerd[2704]: time="2025-05-15T01:18:38.170011011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:38.170389 containerd[2704]: time="2025-05-15T01:18:38.170362556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:38.170410 containerd[2704]: time="2025-05-15T01:18:38.170391508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:38.170488 containerd[2704]: time="2025-05-15T01:18:38.170473165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:38.188976 systemd[1]: Started cri-containerd-52b1d4f2da27f2c7bacf9c80f44fa245a5248eedd648a80f8f1fb3ae408cea6d.scope - libcontainer container 52b1d4f2da27f2c7bacf9c80f44fa245a5248eedd648a80f8f1fb3ae408cea6d. May 15 01:18:38.191236 systemd[1]: Started cri-containerd-d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1.scope - libcontainer container d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1. May 15 01:18:38.206515 containerd[2704]: time="2025-05-15T01:18:38.206482658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkcpw,Uid:1e690c24-364f-4772-832e-26caa1e545f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1\"" May 15 01:18:38.207527 containerd[2704]: time="2025-05-15T01:18:38.207504979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 01:18:38.211500 containerd[2704]: time="2025-05-15T01:18:38.211477735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cb9cfc57-q2gn8,Uid:5d17c1df-9021-40c0-9193-0a2373adc971,Namespace:calico-system,Attempt:0,} returns sandbox id \"52b1d4f2da27f2c7bacf9c80f44fa245a5248eedd648a80f8f1fb3ae408cea6d\"" May 15 01:18:38.639441 containerd[2704]: time="2025-05-15T01:18:38.639398835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:38.639543 containerd[2704]: time="2025-05-15T01:18:38.639439144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 01:18:38.640159 containerd[2704]: time="2025-05-15T01:18:38.640130556Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:38.642024 containerd[2704]: time="2025-05-15T01:18:38.642001485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:38.642686 containerd[2704]: time="2025-05-15T01:18:38.642668343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 435.137531ms" May 15 01:18:38.642722 containerd[2704]: time="2025-05-15T01:18:38.642690017Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 01:18:38.643584 containerd[2704]: time="2025-05-15T01:18:38.643558500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 01:18:38.644668 containerd[2704]: time="2025-05-15T01:18:38.644645643Z" level=info msg="CreateContainer within sandbox \"d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 01:18:38.650239 containerd[2704]: time="2025-05-15T01:18:38.650208925Z" level=info msg="CreateContainer within sandbox \"d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf\"" May 15 01:18:38.650838 containerd[2704]: time="2025-05-15T01:18:38.650533876Z" level=info msg="StartContainer for \"7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf\"" May 15 01:18:38.677023 systemd[1]: Started cri-containerd-7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf.scope - libcontainer container 7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf. May 15 01:18:38.696778 containerd[2704]: time="2025-05-15T01:18:38.696748704Z" level=info msg="StartContainer for \"7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf\" returns successfully" May 15 01:18:38.709724 systemd[1]: cri-containerd-7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf.scope: Deactivated successfully. May 15 01:18:39.036508 containerd[2704]: time="2025-05-15T01:18:39.036456325Z" level=info msg="shim disconnected" id=7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf namespace=k8s.io May 15 01:18:39.036508 containerd[2704]: time="2025-05-15T01:18:39.036505153Z" level=warning msg="cleaning up after shim disconnected" id=7779875a66a9ee5998561943146cb5dca87340b926fc5eac7e8ea6e30cdb93bf namespace=k8s.io May 15 01:18:39.036508 containerd[2704]: time="2025-05-15T01:18:39.036512431Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 01:18:39.335634 kubelet[4234]: E0515 01:18:39.335555 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9rx" podUID="afd4a0f0-7568-4656-98b5-f7eb614f688a" May 15 01:18:39.487157 containerd[2704]: time="2025-05-15T01:18:39.487111590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 01:18:39.487157 containerd[2704]: time="2025-05-15T01:18:39.487112350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:39.487917 containerd[2704]: time="2025-05-15T01:18:39.487893590Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:39.489716 containerd[2704]: time="2025-05-15T01:18:39.489690690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 
01:18:39.490381 containerd[2704]: time="2025-05-15T01:18:39.490357999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 846.769947ms" May 15 01:18:39.490414 containerd[2704]: time="2025-05-15T01:18:39.490387392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 01:18:39.491177 containerd[2704]: time="2025-05-15T01:18:39.491160434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 01:18:39.495688 containerd[2704]: time="2025-05-15T01:18:39.495663682Z" level=info msg="CreateContainer within sandbox \"52b1d4f2da27f2c7bacf9c80f44fa245a5248eedd648a80f8f1fb3ae408cea6d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 01:18:39.500782 containerd[2704]: time="2025-05-15T01:18:39.500754579Z" level=info msg="CreateContainer within sandbox \"52b1d4f2da27f2c7bacf9c80f44fa245a5248eedd648a80f8f1fb3ae408cea6d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"66d44a8341fd0677b7d462ae645b85d7dce6346dc0a1150bcefd7f0156b0b724\"" May 15 01:18:39.501136 containerd[2704]: time="2025-05-15T01:18:39.501115807Z" level=info msg="StartContainer for \"66d44a8341fd0677b7d462ae645b85d7dce6346dc0a1150bcefd7f0156b0b724\"" May 15 01:18:39.537040 systemd[1]: Started cri-containerd-66d44a8341fd0677b7d462ae645b85d7dce6346dc0a1150bcefd7f0156b0b724.scope - libcontainer container 66d44a8341fd0677b7d462ae645b85d7dce6346dc0a1150bcefd7f0156b0b724. 
May 15 01:18:39.560915 containerd[2704]: time="2025-05-15T01:18:39.560886355Z" level=info msg="StartContainer for \"66d44a8341fd0677b7d462ae645b85d7dce6346dc0a1150bcefd7f0156b0b724\" returns successfully" May 15 01:18:40.907486 containerd[2704]: time="2025-05-15T01:18:40.907447191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:40.907789 containerd[2704]: time="2025-05-15T01:18:40.907466267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 01:18:40.908178 containerd[2704]: time="2025-05-15T01:18:40.908159620Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:40.910037 containerd[2704]: time="2025-05-15T01:18:40.910015615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:40.910812 containerd[2704]: time="2025-05-15T01:18:40.910786790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.419597684s" May 15 01:18:40.910840 containerd[2704]: time="2025-05-15T01:18:40.910819782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 01:18:40.912480 containerd[2704]: time="2025-05-15T01:18:40.912456470Z" level=info msg="CreateContainer within sandbox \"d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 01:18:40.918336 containerd[2704]: time="2025-05-15T01:18:40.918309186Z" level=info msg="CreateContainer within sandbox \"d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0\"" May 15 01:18:40.918645 containerd[2704]: time="2025-05-15T01:18:40.918625150Z" level=info msg="StartContainer for \"561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0\"" May 15 01:18:40.964978 systemd[1]: Started cri-containerd-561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0.scope - libcontainer container 561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0. 
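The containerd entries above report three image pulls for this Calico rollout: pod2daemon-flexvol (6492045 bytes in 435.137531ms), typha (29739745 bytes in 846.769947ms) and cni (92625452 bytes in 1.419597684s). A small script like the sketch below can tabulate those figures straight from the journal; it only assumes the "Pulled image ... size ... in ..." message shape visible above and is illustrative, not containerd tooling.

    #!/usr/bin/env python3
    """Tally containerd image-pull sizes and durations from journal output.

    A minimal sketch: it assumes lines shaped like the containerd
    'Pulled image ... size ... in ...' entries in this log, where the quoted
    fields keep their backslash escapes. Feed the log on stdin.
    """
    import re
    import sys

    # size is bytes; duration is a Go duration string such as 435.137531ms or 1.419597684s
    PULLED = re.compile(
        r'Pulled image \\"(?P<image>[^"\\]+)\\".*?'
        r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def main() -> None:
        for line in sys.stdin:
            m = PULLED.search(line)
            if not m:
                continue
            seconds = float(m["dur"]) * (0.001 if m["unit"] == "ms" else 1.0)
            mib = int(m["size"]) / (1024 * 1024)
            print(f'{m["image"]:<55} {mib:7.1f} MiB  {seconds:6.3f} s')

    if __name__ == "__main__":
        main()

On the entries above this yields roughly 6.2 MiB in 0.435 s, 28.4 MiB in 0.847 s and 88.3 MiB in 1.420 s, matching the figures containerd prints.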
May 15 01:18:40.985288 containerd[2704]: time="2025-05-15T01:18:40.985261288Z" level=info msg="StartContainer for \"561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0\" returns successfully" May 15 01:18:41.335735 kubelet[4234]: E0515 01:18:41.335687 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9rx" podUID="afd4a0f0-7568-4656-98b5-f7eb614f688a" May 15 01:18:41.364768 containerd[2704]: time="2025-05-15T01:18:41.364728435Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 01:18:41.366311 systemd[1]: cri-containerd-561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0.scope: Deactivated successfully. May 15 01:18:41.366622 systemd[1]: cri-containerd-561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0.scope: Consumed 889ms CPU time, 183M memory peak, 150.3M written to disk. May 15 01:18:41.377468 kubelet[4234]: I0515 01:18:41.377421 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cb9cfc57-q2gn8" podStartSLOduration=3.098498036 podStartE2EDuration="4.377403625s" podCreationTimestamp="2025-05-15 01:18:37 +0000 UTC" firstStartedPulling="2025-05-15 01:18:38.212168947 +0000 UTC m=+11.949692593" lastFinishedPulling="2025-05-15 01:18:39.491074496 +0000 UTC m=+13.228598182" observedRunningTime="2025-05-15 01:18:40.369343853 +0000 UTC m=+14.106867539" watchObservedRunningTime="2025-05-15 01:18:41.377403625 +0000 UTC m=+15.114927271" May 15 01:18:41.381132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0-rootfs.mount: Deactivated successfully. May 15 01:18:41.450645 kubelet[4234]: I0515 01:18:41.450603 4234 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 01:18:41.467682 systemd[1]: Created slice kubepods-burstable-pod65547478_e6ce_4256_b233_a32124af49fe.slice - libcontainer container kubepods-burstable-pod65547478_e6ce_4256_b233_a32124af49fe.slice. May 15 01:18:41.471707 systemd[1]: Created slice kubepods-burstable-podb65fead4_b7f0_4596_810d_a44b901458bf.slice - libcontainer container kubepods-burstable-podb65fead4_b7f0_4596_810d_a44b901458bf.slice. May 15 01:18:41.475475 systemd[1]: Created slice kubepods-besteffort-poda3063761_b690_49aa_b309_e4b3ccc5c48d.slice - libcontainer container kubepods-besteffort-poda3063761_b690_49aa_b309_e4b3ccc5c48d.slice. May 15 01:18:41.479519 systemd[1]: Created slice kubepods-besteffort-podaf3de368_e3ec_4d89_8716_a98767f6e3d9.slice - libcontainer container kubepods-besteffort-podaf3de368_e3ec_4d89_8716_a98767f6e3d9.slice. May 15 01:18:41.482985 systemd[1]: Created slice kubepods-besteffort-pod5acd0bbb_0e0f_417e_9812_ec35b6c162b3.slice - libcontainer container kubepods-besteffort-pod5acd0bbb_0e0f_417e_9812_ec35b6c162b3.slice. 
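The pod_startup_latency_tracker entry above gives calico-typha-6cb9cfc57-q2gn8 a podStartE2EDuration of 4.377403625s but a podStartSLOduration of only 3.098498036s; the difference appears to be the image-pull window, which the kubelet excludes from the SLO figure. The monotonic m=+ offsets in the same entry let that reading be checked directly (all values below are copied from the log):

    # Check of the calico-typha startup figures from the kubelet entry above,
    # under the reading that podStartSLOduration = E2E duration minus the
    # firstStartedPulling..lastFinishedPulling window.
    first_started_pulling = 11.949692593   # m=+ offset of firstStartedPulling
    last_finished_pulling = 13.228598182   # m=+ offset of lastFinishedPulling
    e2e_duration = 4.377403625             # podStartE2EDuration, seconds

    pull_window = last_finished_pulling - first_started_pulling
    slo_duration = e2e_duration - pull_window

    print(f"image pull window: {pull_window:.9f}s")   # 1.278905589s
    print(f"SLO duration:      {slo_duration:.9f}s")  # 3.098498036s, matching the log

The reconstruction agrees with the logged value to the last printed digit, which is consistent with the reading above, though it is inferred from this one entry rather than from kubelet source.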
May 15 01:18:41.522591 kubelet[4234]: I0515 01:18:41.522556 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnkl2\" (UniqueName: \"kubernetes.io/projected/b65fead4-b7f0-4596-810d-a44b901458bf-kube-api-access-rnkl2\") pod \"coredns-668d6bf9bc-msqz5\" (UID: \"b65fead4-b7f0-4596-810d-a44b901458bf\") " pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:41.522659 kubelet[4234]: I0515 01:18:41.522595 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84rcw\" (UniqueName: \"kubernetes.io/projected/a3063761-b690-49aa-b309-e4b3ccc5c48d-kube-api-access-84rcw\") pod \"calico-kube-controllers-df45848cb-96g89\" (UID: \"a3063761-b690-49aa-b309-e4b3ccc5c48d\") " pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:41.522659 kubelet[4234]: I0515 01:18:41.522617 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5acd0bbb-0e0f-417e-9812-ec35b6c162b3-calico-apiserver-certs\") pod \"calico-apiserver-65cbd4f5f8-cmr4c\" (UID: \"5acd0bbb-0e0f-417e-9812-ec35b6c162b3\") " pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:41.522659 kubelet[4234]: I0515 01:18:41.522633 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9zfx\" (UniqueName: \"kubernetes.io/projected/5acd0bbb-0e0f-417e-9812-ec35b6c162b3-kube-api-access-h9zfx\") pod \"calico-apiserver-65cbd4f5f8-cmr4c\" (UID: \"5acd0bbb-0e0f-417e-9812-ec35b6c162b3\") " pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:41.522659 kubelet[4234]: I0515 01:18:41.522652 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65547478-e6ce-4256-b233-a32124af49fe-config-volume\") pod \"coredns-668d6bf9bc-9rxxw\" (UID: \"65547478-e6ce-4256-b233-a32124af49fe\") " pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:41.522828 kubelet[4234]: I0515 01:18:41.522749 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b65fead4-b7f0-4596-810d-a44b901458bf-config-volume\") pod \"coredns-668d6bf9bc-msqz5\" (UID: \"b65fead4-b7f0-4596-810d-a44b901458bf\") " pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:41.522828 kubelet[4234]: I0515 01:18:41.522813 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3063761-b690-49aa-b309-e4b3ccc5c48d-tigera-ca-bundle\") pod \"calico-kube-controllers-df45848cb-96g89\" (UID: \"a3063761-b690-49aa-b309-e4b3ccc5c48d\") " pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:41.522911 kubelet[4234]: I0515 01:18:41.522853 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvdbm\" (UniqueName: \"kubernetes.io/projected/af3de368-e3ec-4d89-8716-a98767f6e3d9-kube-api-access-gvdbm\") pod \"calico-apiserver-65cbd4f5f8-4mgvv\" (UID: \"af3de368-e3ec-4d89-8716-a98767f6e3d9\") " pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:41.522911 kubelet[4234]: I0515 01:18:41.522895 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-njdf9\" (UniqueName: \"kubernetes.io/projected/65547478-e6ce-4256-b233-a32124af49fe-kube-api-access-njdf9\") pod \"coredns-668d6bf9bc-9rxxw\" (UID: \"65547478-e6ce-4256-b233-a32124af49fe\") " pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:41.522957 kubelet[4234]: I0515 01:18:41.522928 4234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/af3de368-e3ec-4d89-8716-a98767f6e3d9-calico-apiserver-certs\") pod \"calico-apiserver-65cbd4f5f8-4mgvv\" (UID: \"af3de368-e3ec-4d89-8716-a98767f6e3d9\") " pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:41.605957 containerd[2704]: time="2025-05-15T01:18:41.605867414Z" level=info msg="shim disconnected" id=561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0 namespace=k8s.io May 15 01:18:41.605957 containerd[2704]: time="2025-05-15T01:18:41.605916803Z" level=warning msg="cleaning up after shim disconnected" id=561fb09d9747fda9ee36a3fcb184457c6bff585cb292f284fe5d30c5bb9c6cd0 namespace=k8s.io May 15 01:18:41.605957 containerd[2704]: time="2025-05-15T01:18:41.605927361Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 01:18:41.770740 containerd[2704]: time="2025-05-15T01:18:41.770676916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:0,}" May 15 01:18:41.774202 containerd[2704]: time="2025-05-15T01:18:41.774169411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:0,}" May 15 01:18:41.777703 containerd[2704]: time="2025-05-15T01:18:41.777641350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:0,}" May 15 01:18:41.782201 containerd[2704]: time="2025-05-15T01:18:41.782179490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:0,}" May 15 01:18:41.785706 containerd[2704]: time="2025-05-15T01:18:41.785683902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:0,}" May 15 01:18:41.830794 containerd[2704]: time="2025-05-15T01:18:41.830745730Z" level=error msg="Failed to destroy network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.830949 containerd[2704]: time="2025-05-15T01:18:41.830869662Z" level=error msg="Failed to destroy network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831006 containerd[2704]: time="2025-05-15T01:18:41.830978237Z" level=error msg="Failed to destroy network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831149 containerd[2704]: time="2025-05-15T01:18:41.831118326Z" level=error msg="encountered an error cleaning up failed sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831209 containerd[2704]: time="2025-05-15T01:18:41.831194789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831246 containerd[2704]: time="2025-05-15T01:18:41.831131763Z" level=error msg="Failed to destroy network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831305 containerd[2704]: time="2025-05-15T01:18:41.831284689Z" level=error msg="encountered an error cleaning up failed sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831444 containerd[2704]: time="2025-05-15T01:18:41.831340196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831444 containerd[2704]: time="2025-05-15T01:18:41.831224542Z" level=error msg="encountered an error cleaning up failed sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831444 containerd[2704]: time="2025-05-15T01:18:41.831420178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831576 kubelet[4234]: E0515 01:18:41.831375 4234 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831576 kubelet[4234]: E0515 01:18:41.831448 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:41.831576 kubelet[4234]: E0515 01:18:41.831445 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831576 kubelet[4234]: E0515 01:18:41.831466 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:41.831680 containerd[2704]: time="2025-05-15T01:18:41.831433375Z" level=error msg="Failed to destroy network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831680 containerd[2704]: time="2025-05-15T01:18:41.831523955Z" level=error msg="encountered an error cleaning up failed sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831680 containerd[2704]: time="2025-05-15T01:18:41.831579422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831755 kubelet[4234]: E0515 01:18:41.831488 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:41.831755 kubelet[4234]: E0515 01:18:41.831506 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:41.831755 kubelet[4234]: E0515 01:18:41.831509 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-msqz5" podUID="b65fead4-b7f0-4596-810d-a44b901458bf" May 15 01:18:41.831838 containerd[2704]: time="2025-05-15T01:18:41.831727509Z" level=error msg="encountered an error cleaning up failed sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831838 containerd[2704]: time="2025-05-15T01:18:41.831771179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831907 kubelet[4234]: E0515 01:18:41.831535 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" podUID="5acd0bbb-0e0f-417e-9812-ec35b6c162b3" May 15 01:18:41.831907 kubelet[4234]: E0515 01:18:41.831547 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.831907 kubelet[4234]: E0515 01:18:41.831588 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:41.832042 kubelet[4234]: E0515 01:18:41.831606 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:41.832042 kubelet[4234]: E0515 01:18:41.831636 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9rxxw" podUID="65547478-e6ce-4256-b233-a32124af49fe" May 15 01:18:41.832042 kubelet[4234]: E0515 01:18:41.831678 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.832125 kubelet[4234]: E0515 01:18:41.831705 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:41.832125 kubelet[4234]: E0515 01:18:41.831718 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:41.832125 kubelet[4234]: E0515 01:18:41.831747 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df45848cb-96g89" podUID="a3063761-b690-49aa-b309-e4b3ccc5c48d" May 15 01:18:41.832212 kubelet[4234]: E0515 01:18:41.831875 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:41.832212 kubelet[4234]: E0515 01:18:41.831904 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:41.832212 kubelet[4234]: E0515 01:18:41.831920 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:41.832281 kubelet[4234]: E0515 01:18:41.831946 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" podUID="af3de368-e3ec-4d89-8716-a98767f6e3d9" May 15 01:18:42.367105 containerd[2704]: time="2025-05-15T01:18:42.367076042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 01:18:42.367425 kubelet[4234]: I0515 01:18:42.367250 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8" May 15 01:18:42.367794 containerd[2704]: time="2025-05-15T01:18:42.367771256Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" 
May 15 01:18:42.367946 containerd[2704]: time="2025-05-15T01:18:42.367931062Z" level=info msg="Ensure that sandbox 3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8 in task-service has been cleanup successfully" May 15 01:18:42.368118 containerd[2704]: time="2025-05-15T01:18:42.368105305Z" level=info msg="TearDown network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" successfully" May 15 01:18:42.368118 containerd[2704]: time="2025-05-15T01:18:42.368118183Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" returns successfully" May 15 01:18:42.368335 kubelet[4234]: I0515 01:18:42.368321 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152" May 15 01:18:42.368483 containerd[2704]: time="2025-05-15T01:18:42.368467229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:1,}" May 15 01:18:42.368656 containerd[2704]: time="2025-05-15T01:18:42.368639273Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" May 15 01:18:42.368792 containerd[2704]: time="2025-05-15T01:18:42.368778363Z" level=info msg="Ensure that sandbox 478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152 in task-service has been cleanup successfully" May 15 01:18:42.368945 containerd[2704]: time="2025-05-15T01:18:42.368926852Z" level=info msg="TearDown network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" successfully" May 15 01:18:42.368945 containerd[2704]: time="2025-05-15T01:18:42.368939210Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" returns successfully" May 15 01:18:42.369219 kubelet[4234]: I0515 01:18:42.369158 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6" May 15 01:18:42.369597 containerd[2704]: time="2025-05-15T01:18:42.369363520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:1,}" May 15 01:18:42.369597 containerd[2704]: time="2025-05-15T01:18:42.369502211Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" May 15 01:18:42.369576 systemd[1]: run-netns-cni\x2d99fd7f49\x2d99bb\x2dc17f\x2d47a0\x2d77d1b7b9d781.mount: Deactivated successfully. 
May 15 01:18:42.370042 containerd[2704]: time="2025-05-15T01:18:42.369650820Z" level=info msg="Ensure that sandbox dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6 in task-service has been cleanup successfully" May 15 01:18:42.370042 containerd[2704]: time="2025-05-15T01:18:42.369797189Z" level=info msg="TearDown network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" successfully" May 15 01:18:42.370042 containerd[2704]: time="2025-05-15T01:18:42.369808586Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" returns successfully" May 15 01:18:42.370222 kubelet[4234]: I0515 01:18:42.370148 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c" May 15 01:18:42.370482 containerd[2704]: time="2025-05-15T01:18:42.370390184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:1,}" May 15 01:18:42.370482 containerd[2704]: time="2025-05-15T01:18:42.370448011Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" May 15 01:18:42.370600 containerd[2704]: time="2025-05-15T01:18:42.370585063Z" level=info msg="Ensure that sandbox 4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c in task-service has been cleanup successfully" May 15 01:18:42.370735 containerd[2704]: time="2025-05-15T01:18:42.370722394Z" level=info msg="TearDown network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" successfully" May 15 01:18:42.370735 containerd[2704]: time="2025-05-15T01:18:42.370734351Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" returns successfully" May 15 01:18:42.371126 containerd[2704]: time="2025-05-15T01:18:42.371103633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:1,}" May 15 01:18:42.371494 systemd[1]: run-netns-cni\x2d73d8cdff\x2d2532\x2d04c1\x2daac8\x2daebec0ceb2ea.mount: Deactivated successfully. May 15 01:18:42.371807 kubelet[4234]: I0515 01:18:42.371576 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c" May 15 01:18:42.371573 systemd[1]: run-netns-cni\x2dfcdd8d81\x2df41c\x2deaf3\x2d337d\x2d8694a30c7b80.mount: Deactivated successfully. 
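The run-netns-cni\x2d....mount units that systemd deactivates above are the mount units tracking the network namespaces of the sandboxes just torn down. systemd escapes '/' as '-' and a literal '-' as \x2d in unit names, so each name decodes back to a bind mount under /run/netns. A small decoder, assuming only that escaping convention:

    import re

    def unit_to_path(unit: str) -> str:
        """Decode a systemd-escaped .mount unit name back into the mounted path.

        systemd turns '/' into '-' and escapes awkward bytes (including a
        literal '-') as \\xNN, which is why the netns units above are full of
        \\x2d sequences.
        """
        name = unit.removesuffix(".mount")
        # Unescaped '-' separates path components; split before decoding \xNN
        # so the literal dashes hidden inside \x2d are preserved.
        path = "/".join(name.split("-"))
        path = re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)
        return "/" + path

    print(unit_to_path(r"run-netns-cni\x2d99fd7f49\x2d99bb\x2dc17f\x2d47a0\x2d77d1b7b9d781.mount"))
    # -> /run/netns/cni-99fd7f49-99bb-c17f-47a0-77d1b7b9d781

Decoding is cosmetic, but it makes clear that these deactivations are just the /run/netns bind mounts of the failed sandboxes being cleaned up.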
May 15 01:18:42.372037 containerd[2704]: time="2025-05-15T01:18:42.372018880Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" May 15 01:18:42.372226 containerd[2704]: time="2025-05-15T01:18:42.372205041Z" level=info msg="Ensure that sandbox cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c in task-service has been cleanup successfully" May 15 01:18:42.372448 containerd[2704]: time="2025-05-15T01:18:42.372352610Z" level=info msg="TearDown network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" successfully" May 15 01:18:42.372448 containerd[2704]: time="2025-05-15T01:18:42.372364567Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" returns successfully" May 15 01:18:42.372779 containerd[2704]: time="2025-05-15T01:18:42.372717213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:1,}" May 15 01:18:42.374576 systemd[1]: run-netns-cni\x2dffcd3651\x2d8d25\x2d156f\x2d2abc\x2d47049de2ccd6.mount: Deactivated successfully. May 15 01:18:42.374646 systemd[1]: run-netns-cni\x2d4968abc9\x2d2f47\x2d5592\x2d39f9\x2dc4465219c422.mount: Deactivated successfully. May 15 01:18:42.415117 containerd[2704]: time="2025-05-15T01:18:42.415063127Z" level=error msg="Failed to destroy network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.415439 containerd[2704]: time="2025-05-15T01:18:42.415416532Z" level=error msg="encountered an error cleaning up failed sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.415488 containerd[2704]: time="2025-05-15T01:18:42.415473160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.415719 kubelet[4234]: E0515 01:18:42.415681 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.415755 kubelet[4234]: E0515 01:18:42.415743 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:42.415787 kubelet[4234]: E0515 01:18:42.415764 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:42.415821 kubelet[4234]: E0515 01:18:42.415802 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" podUID="af3de368-e3ec-4d89-8716-a98767f6e3d9" May 15 01:18:42.417260 containerd[2704]: time="2025-05-15T01:18:42.417224511Z" level=error msg="Failed to destroy network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.417548 containerd[2704]: time="2025-05-15T01:18:42.417526368Z" level=error msg="encountered an error cleaning up failed sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.417599 containerd[2704]: time="2025-05-15T01:18:42.417583036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.417746 containerd[2704]: time="2025-05-15T01:18:42.417715048Z" level=error msg="Failed to destroy network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.417772 kubelet[4234]: E0515 01:18:42.417729 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.417798 kubelet[4234]: E0515 01:18:42.417772 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:42.417798 kubelet[4234]: E0515 01:18:42.417789 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:42.417846 kubelet[4234]: E0515 01:18:42.417826 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9rxxw" podUID="65547478-e6ce-4256-b233-a32124af49fe" May 15 01:18:42.418039 containerd[2704]: time="2025-05-15T01:18:42.418016944Z" level=error msg="encountered an error cleaning up failed sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.418078 containerd[2704]: time="2025-05-15T01:18:42.418062335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.418200 kubelet[4234]: E0515 01:18:42.418176 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.418230 kubelet[4234]: E0515 01:18:42.418216 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:42.418254 kubelet[4234]: E0515 01:18:42.418232 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:42.418278 kubelet[4234]: E0515 01:18:42.418263 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-msqz5" podUID="b65fead4-b7f0-4596-810d-a44b901458bf" May 15 01:18:42.419785 containerd[2704]: time="2025-05-15T01:18:42.419739981Z" level=error msg="Failed to destroy network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.420121 containerd[2704]: time="2025-05-15T01:18:42.420097905Z" level=error msg="encountered an error cleaning up failed sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.420163 containerd[2704]: time="2025-05-15T01:18:42.420144056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.420220 containerd[2704]: time="2025-05-15T01:18:42.420195605Z" level=error msg="Failed to destroy network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.420793 kubelet[4234]: E0515 01:18:42.420286 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.420793 kubelet[4234]: E0515 01:18:42.420332 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:42.420793 kubelet[4234]: E0515 01:18:42.420350 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:42.420917 containerd[2704]: time="2025-05-15T01:18:42.420508419Z" level=error msg="encountered an error cleaning up failed sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.420917 containerd[2704]: time="2025-05-15T01:18:42.420547931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:42.421011 kubelet[4234]: E0515 01:18:42.420386 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df45848cb-96g89" podUID="a3063761-b690-49aa-b309-e4b3ccc5c48d" May 15 01:18:42.421011 kubelet[4234]: E0515 01:18:42.420696 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 15 01:18:42.421011 kubelet[4234]: E0515 01:18:42.420738 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:42.421126 kubelet[4234]: E0515 01:18:42.420761 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:42.421126 kubelet[4234]: E0515 01:18:42.420790 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" podUID="5acd0bbb-0e0f-417e-9812-ec35b6c162b3" May 15 01:18:43.001493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed-shm.mount: Deactivated successfully. May 15 01:18:43.001576 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6-shm.mount: Deactivated successfully. May 15 01:18:43.001624 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a-shm.mount: Deactivated successfully. May 15 01:18:43.339960 systemd[1]: Created slice kubepods-besteffort-podafd4a0f0_7568_4656_98b5_f7eb614f688a.slice - libcontainer container kubepods-besteffort-podafd4a0f0_7568_4656_98b5_f7eb614f688a.slice. 
May 15 01:18:43.341632 containerd[2704]: time="2025-05-15T01:18:43.341600187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9rx,Uid:afd4a0f0-7568-4656-98b5-f7eb614f688a,Namespace:calico-system,Attempt:0,}" May 15 01:18:43.373737 kubelet[4234]: I0515 01:18:43.373709 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a" May 15 01:18:43.374132 containerd[2704]: time="2025-05-15T01:18:43.374105803Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\"" May 15 01:18:43.374296 containerd[2704]: time="2025-05-15T01:18:43.374281808Z" level=info msg="Ensure that sandbox fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a in task-service has been cleanup successfully" May 15 01:18:43.374499 containerd[2704]: time="2025-05-15T01:18:43.374481169Z" level=info msg="TearDown network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" successfully" May 15 01:18:43.374530 containerd[2704]: time="2025-05-15T01:18:43.374499485Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" returns successfully" May 15 01:18:43.374697 kubelet[4234]: I0515 01:18:43.374679 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed" May 15 01:18:43.374733 containerd[2704]: time="2025-05-15T01:18:43.374693807Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" May 15 01:18:43.374780 containerd[2704]: time="2025-05-15T01:18:43.374768752Z" level=info msg="TearDown network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" successfully" May 15 01:18:43.374803 containerd[2704]: time="2025-05-15T01:18:43.374780469Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" returns successfully" May 15 01:18:43.375081 containerd[2704]: time="2025-05-15T01:18:43.375064773Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\"" May 15 01:18:43.375113 containerd[2704]: time="2025-05-15T01:18:43.375090568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:2,}" May 15 01:18:43.375213 containerd[2704]: time="2025-05-15T01:18:43.375200786Z" level=info msg="Ensure that sandbox a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed in task-service has been cleanup successfully" May 15 01:18:43.375362 containerd[2704]: time="2025-05-15T01:18:43.375349437Z" level=info msg="TearDown network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" successfully" May 15 01:18:43.375384 containerd[2704]: time="2025-05-15T01:18:43.375362035Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" returns successfully" May 15 01:18:43.375583 kubelet[4234]: I0515 01:18:43.375566 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6" May 15 01:18:43.375669 containerd[2704]: time="2025-05-15T01:18:43.375651537Z" level=info msg="StopPodSandbox for 
\"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" May 15 01:18:43.375746 containerd[2704]: time="2025-05-15T01:18:43.375734601Z" level=info msg="TearDown network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" successfully" May 15 01:18:43.375769 containerd[2704]: time="2025-05-15T01:18:43.375747198Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" returns successfully" May 15 01:18:43.375873 systemd[1]: run-netns-cni\x2d41468551\x2dae41\x2d19c8\x2dedd7\x2df792a51e60d4.mount: Deactivated successfully. May 15 01:18:43.376028 containerd[2704]: time="2025-05-15T01:18:43.375933642Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\"" May 15 01:18:43.376074 containerd[2704]: time="2025-05-15T01:18:43.376057737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:2,}" May 15 01:18:43.376101 containerd[2704]: time="2025-05-15T01:18:43.376075134Z" level=info msg="Ensure that sandbox d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6 in task-service has been cleanup successfully" May 15 01:18:43.376288 containerd[2704]: time="2025-05-15T01:18:43.376270935Z" level=info msg="TearDown network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" successfully" May 15 01:18:43.376309 containerd[2704]: time="2025-05-15T01:18:43.376300089Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" returns successfully" May 15 01:18:43.376540 kubelet[4234]: I0515 01:18:43.376526 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0" May 15 01:18:43.376564 containerd[2704]: time="2025-05-15T01:18:43.376536362Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" May 15 01:18:43.376614 containerd[2704]: time="2025-05-15T01:18:43.376603509Z" level=info msg="TearDown network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" successfully" May 15 01:18:43.376634 containerd[2704]: time="2025-05-15T01:18:43.376614907Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" returns successfully" May 15 01:18:43.377025 containerd[2704]: time="2025-05-15T01:18:43.377013668Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\"" May 15 01:18:43.377054 containerd[2704]: time="2025-05-15T01:18:43.377018827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:2,}" May 15 01:18:43.377154 containerd[2704]: time="2025-05-15T01:18:43.377141363Z" level=info msg="Ensure that sandbox 1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0 in task-service has been cleanup successfully" May 15 01:18:43.377330 containerd[2704]: time="2025-05-15T01:18:43.377316848Z" level=info msg="TearDown network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" successfully" May 15 01:18:43.377351 containerd[2704]: time="2025-05-15T01:18:43.377331605Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" returns successfully" May 
15 01:18:43.377523 containerd[2704]: time="2025-05-15T01:18:43.377510450Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" May 15 01:18:43.377583 containerd[2704]: time="2025-05-15T01:18:43.377573358Z" level=info msg="TearDown network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" successfully" May 15 01:18:43.377606 containerd[2704]: time="2025-05-15T01:18:43.377583276Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" returns successfully" May 15 01:18:43.377818 systemd[1]: run-netns-cni\x2d1e696778\x2de35b\x2d623c\x2dff05\x2dbdb885965b32.mount: Deactivated successfully. May 15 01:18:43.377904 systemd[1]: run-netns-cni\x2d4155de95\x2d75c0\x2dd351\x2dad25\x2df2b7ab417d9b.mount: Deactivated successfully. May 15 01:18:43.377969 containerd[2704]: time="2025-05-15T01:18:43.377949283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:2,}" May 15 01:18:43.378051 kubelet[4234]: I0515 01:18:43.378035 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09" May 15 01:18:43.378451 containerd[2704]: time="2025-05-15T01:18:43.378433148Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\"" May 15 01:18:43.378575 containerd[2704]: time="2025-05-15T01:18:43.378564522Z" level=info msg="Ensure that sandbox 86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09 in task-service has been cleanup successfully" May 15 01:18:43.378733 containerd[2704]: time="2025-05-15T01:18:43.378718531Z" level=info msg="TearDown network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" successfully" May 15 01:18:43.378757 containerd[2704]: time="2025-05-15T01:18:43.378734128Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" returns successfully" May 15 01:18:43.378999 containerd[2704]: time="2025-05-15T01:18:43.378974201Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" May 15 01:18:43.379075 containerd[2704]: time="2025-05-15T01:18:43.379062743Z" level=info msg="TearDown network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" successfully" May 15 01:18:43.379096 containerd[2704]: time="2025-05-15T01:18:43.379075421Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" returns successfully" May 15 01:18:43.379442 containerd[2704]: time="2025-05-15T01:18:43.379423392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:2,}" May 15 01:18:43.385014 containerd[2704]: time="2025-05-15T01:18:43.384980534Z" level=error msg="Failed to destroy network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.385935 containerd[2704]: time="2025-05-15T01:18:43.385888274Z" level=error msg="encountered an error cleaning up failed sandbox 
\"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.386016 containerd[2704]: time="2025-05-15T01:18:43.385982896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9rx,Uid:afd4a0f0-7568-4656-98b5-f7eb614f688a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.386613 kubelet[4234]: E0515 01:18:43.386180 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.386613 kubelet[4234]: E0515 01:18:43.386231 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:43.386613 kubelet[4234]: E0515 01:18:43.386251 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:43.386777 kubelet[4234]: E0515 01:18:43.386290 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jm9rx_calico-system(afd4a0f0-7568-4656-98b5-f7eb614f688a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jm9rx_calico-system(afd4a0f0-7568-4656-98b5-f7eb614f688a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jm9rx" podUID="afd4a0f0-7568-4656-98b5-f7eb614f688a" May 15 01:18:43.422581 containerd[2704]: time="2025-05-15T01:18:43.422453768Z" level=error msg="Failed to destroy network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.423152 containerd[2704]: time="2025-05-15T01:18:43.423121836Z" level=error 
msg="encountered an error cleaning up failed sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.423266 containerd[2704]: time="2025-05-15T01:18:43.423245012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.423550 kubelet[4234]: E0515 01:18:43.423514 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.423602 kubelet[4234]: E0515 01:18:43.423574 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:43.423602 kubelet[4234]: E0515 01:18:43.423593 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:43.423678 kubelet[4234]: E0515 01:18:43.423630 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" podUID="af3de368-e3ec-4d89-8716-a98767f6e3d9" May 15 01:18:43.424153 containerd[2704]: time="2025-05-15T01:18:43.424102323Z" level=error msg="Failed to destroy network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 15 01:18:43.424329 containerd[2704]: time="2025-05-15T01:18:43.424133276Z" level=error msg="Failed to destroy network for sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.424619 containerd[2704]: time="2025-05-15T01:18:43.424593266Z" level=error msg="encountered an error cleaning up failed sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.424727 containerd[2704]: time="2025-05-15T01:18:43.424707643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.424833 containerd[2704]: time="2025-05-15T01:18:43.424615581Z" level=error msg="encountered an error cleaning up failed sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.424921 containerd[2704]: time="2025-05-15T01:18:43.424902964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.424953 kubelet[4234]: E0515 01:18:43.424925 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.425001 kubelet[4234]: E0515 01:18:43.424971 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:43.425001 kubelet[4234]: E0515 01:18:43.424987 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:43.425049 kubelet[4234]: E0515 01:18:43.425018 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9rxxw" podUID="65547478-e6ce-4256-b233-a32124af49fe" May 15 01:18:43.425266 kubelet[4234]: E0515 01:18:43.425232 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.425332 kubelet[4234]: E0515 01:18:43.425275 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:43.425332 kubelet[4234]: E0515 01:18:43.425291 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:43.425332 kubelet[4234]: E0515 01:18:43.425322 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-msqz5" podUID="b65fead4-b7f0-4596-810d-a44b901458bf" May 15 01:18:43.425609 containerd[2704]: time="2025-05-15T01:18:43.425578471Z" level=error msg="Failed to destroy network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.425906 containerd[2704]: time="2025-05-15T01:18:43.425883291Z" level=error msg="encountered an error cleaning up failed sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.425963 containerd[2704]: time="2025-05-15T01:18:43.425927402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.426095 kubelet[4234]: E0515 01:18:43.426065 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.426130 kubelet[4234]: E0515 01:18:43.426113 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:43.426153 kubelet[4234]: E0515 01:18:43.426131 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:43.426180 kubelet[4234]: E0515 01:18:43.426164 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" podUID="5acd0bbb-0e0f-417e-9812-ec35b6c162b3" May 15 01:18:43.426282 containerd[2704]: time="2025-05-15T01:18:43.426252098Z" level=error msg="Failed to 
destroy network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.426530 containerd[2704]: time="2025-05-15T01:18:43.426508567Z" level=error msg="encountered an error cleaning up failed sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.426568 containerd[2704]: time="2025-05-15T01:18:43.426552238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.426688 kubelet[4234]: E0515 01:18:43.426665 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:43.426725 kubelet[4234]: E0515 01:18:43.426698 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:43.426725 kubelet[4234]: E0515 01:18:43.426711 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:43.426776 kubelet[4234]: E0515 01:18:43.426741 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df45848cb-96g89" 
podUID="a3063761-b690-49aa-b309-e4b3ccc5c48d" May 15 01:18:44.002214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000-shm.mount: Deactivated successfully. May 15 01:18:44.002300 systemd[1]: run-netns-cni\x2d1aae21bd\x2d2134\x2d36b1\x2d3c24\x2da3459a9ccf9c.mount: Deactivated successfully. May 15 01:18:44.002344 systemd[1]: run-netns-cni\x2d1c7f614d\x2d20ac\x2dd3db\x2d6f00\x2ddf6412e1db51.mount: Deactivated successfully. May 15 01:18:44.380843 kubelet[4234]: I0515 01:18:44.380755 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8" May 15 01:18:44.381392 containerd[2704]: time="2025-05-15T01:18:44.381213518Z" level=info msg="StopPodSandbox for \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\"" May 15 01:18:44.381392 containerd[2704]: time="2025-05-15T01:18:44.381387406Z" level=info msg="Ensure that sandbox 7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8 in task-service has been cleanup successfully" May 15 01:18:44.381666 containerd[2704]: time="2025-05-15T01:18:44.381570492Z" level=info msg="TearDown network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" successfully" May 15 01:18:44.381666 containerd[2704]: time="2025-05-15T01:18:44.381584529Z" level=info msg="StopPodSandbox for \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" returns successfully" May 15 01:18:44.381912 containerd[2704]: time="2025-05-15T01:18:44.381826645Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\"" May 15 01:18:44.381912 containerd[2704]: time="2025-05-15T01:18:44.381918308Z" level=info msg="TearDown network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" successfully" May 15 01:18:44.381912 containerd[2704]: time="2025-05-15T01:18:44.381931145Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" returns successfully" May 15 01:18:44.382090 kubelet[4234]: I0515 01:18:44.382005 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61" May 15 01:18:44.382218 containerd[2704]: time="2025-05-15T01:18:44.382197976Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" May 15 01:18:44.382285 containerd[2704]: time="2025-05-15T01:18:44.382273602Z" level=info msg="TearDown network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" successfully" May 15 01:18:44.382314 containerd[2704]: time="2025-05-15T01:18:44.382285040Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" returns successfully" May 15 01:18:44.382363 containerd[2704]: time="2025-05-15T01:18:44.382346708Z" level=info msg="StopPodSandbox for \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\"" May 15 01:18:44.382486 containerd[2704]: time="2025-05-15T01:18:44.382473805Z" level=info msg="Ensure that sandbox eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61 in task-service has been cleanup successfully" May 15 01:18:44.382732 containerd[2704]: time="2025-05-15T01:18:44.382611699Z" level=info msg="TearDown network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" 
successfully" May 15 01:18:44.382732 containerd[2704]: time="2025-05-15T01:18:44.382624457Z" level=info msg="StopPodSandbox for \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" returns successfully" May 15 01:18:44.382732 containerd[2704]: time="2025-05-15T01:18:44.382633055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:3,}" May 15 01:18:44.382829 containerd[2704]: time="2025-05-15T01:18:44.382804663Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\"" May 15 01:18:44.382888 containerd[2704]: time="2025-05-15T01:18:44.382876690Z" level=info msg="TearDown network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" successfully" May 15 01:18:44.382888 containerd[2704]: time="2025-05-15T01:18:44.382886568Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" returns successfully" May 15 01:18:44.383110 containerd[2704]: time="2025-05-15T01:18:44.383056497Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" May 15 01:18:44.383152 containerd[2704]: time="2025-05-15T01:18:44.383132283Z" level=info msg="TearDown network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" successfully" May 15 01:18:44.383152 containerd[2704]: time="2025-05-15T01:18:44.383145480Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" returns successfully" May 15 01:18:44.383123 systemd[1]: run-netns-cni\x2d0c9d4e82\x2dd0e0\x2d683f\x2d3133\x2dcd933542c69d.mount: Deactivated successfully. 
May 15 01:18:44.383363 kubelet[4234]: I0515 01:18:44.383322 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c" May 15 01:18:44.383578 containerd[2704]: time="2025-05-15T01:18:44.383501414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:3,}" May 15 01:18:44.384219 containerd[2704]: time="2025-05-15T01:18:44.384156213Z" level=info msg="StopPodSandbox for \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\"" May 15 01:18:44.384328 containerd[2704]: time="2025-05-15T01:18:44.384297027Z" level=info msg="Ensure that sandbox 86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c in task-service has been cleanup successfully" May 15 01:18:44.384950 containerd[2704]: time="2025-05-15T01:18:44.384490871Z" level=info msg="TearDown network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" successfully" May 15 01:18:44.384950 containerd[2704]: time="2025-05-15T01:18:44.384503549Z" level=info msg="StopPodSandbox for \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" returns successfully" May 15 01:18:44.384950 containerd[2704]: time="2025-05-15T01:18:44.384912233Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\"" May 15 01:18:44.385051 containerd[2704]: time="2025-05-15T01:18:44.384994778Z" level=info msg="TearDown network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" successfully" May 15 01:18:44.385051 containerd[2704]: time="2025-05-15T01:18:44.385005816Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" returns successfully" May 15 01:18:44.385090 systemd[1]: run-netns-cni\x2d27b41496\x2debfb\x2d05c6\x2d0fc6\x2dc6110e927d9e.mount: Deactivated successfully. 
May 15 01:18:44.385422 containerd[2704]: time="2025-05-15T01:18:44.385395743Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" May 15 01:18:44.386014 containerd[2704]: time="2025-05-15T01:18:44.385484287Z" level=info msg="TearDown network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" successfully" May 15 01:18:44.386014 containerd[2704]: time="2025-05-15T01:18:44.385495565Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" returns successfully" May 15 01:18:44.386129 kubelet[4234]: I0515 01:18:44.386106 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8" May 15 01:18:44.386378 containerd[2704]: time="2025-05-15T01:18:44.386356205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:3,}" May 15 01:18:44.386635 containerd[2704]: time="2025-05-15T01:18:44.386557928Z" level=info msg="StopPodSandbox for \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\"" May 15 01:18:44.386716 containerd[2704]: time="2025-05-15T01:18:44.386701741Z" level=info msg="Ensure that sandbox 376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8 in task-service has been cleanup successfully" May 15 01:18:44.386869 containerd[2704]: time="2025-05-15T01:18:44.386844915Z" level=info msg="TearDown network for sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" successfully" May 15 01:18:44.386869 containerd[2704]: time="2025-05-15T01:18:44.386862472Z" level=info msg="StopPodSandbox for \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" returns successfully" May 15 01:18:44.387002 systemd[1]: run-netns-cni\x2dd7d3e539\x2dbea9\x2d6515\x2dc26f\x2d83a21cf3543c.mount: Deactivated successfully. 
May 15 01:18:44.387104 kubelet[4234]: I0515 01:18:44.387068 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000" May 15 01:18:44.387149 containerd[2704]: time="2025-05-15T01:18:44.387093109Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\"" May 15 01:18:44.387194 containerd[2704]: time="2025-05-15T01:18:44.387181013Z" level=info msg="TearDown network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" successfully" May 15 01:18:44.387194 containerd[2704]: time="2025-05-15T01:18:44.387191371Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" returns successfully" May 15 01:18:44.387407 containerd[2704]: time="2025-05-15T01:18:44.387382695Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" May 15 01:18:44.387472 containerd[2704]: time="2025-05-15T01:18:44.387453522Z" level=info msg="TearDown network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" successfully" May 15 01:18:44.387502 containerd[2704]: time="2025-05-15T01:18:44.387463880Z" level=info msg="StopPodSandbox for \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\"" May 15 01:18:44.387534 containerd[2704]: time="2025-05-15T01:18:44.387464000Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" returns successfully" May 15 01:18:44.387619 containerd[2704]: time="2025-05-15T01:18:44.387604494Z" level=info msg="Ensure that sandbox 29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000 in task-service has been cleanup successfully" May 15 01:18:44.387858 containerd[2704]: time="2025-05-15T01:18:44.387759425Z" level=info msg="TearDown network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" successfully" May 15 01:18:44.387858 containerd[2704]: time="2025-05-15T01:18:44.387772623Z" level=info msg="StopPodSandbox for \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" returns successfully" May 15 01:18:44.387946 containerd[2704]: time="2025-05-15T01:18:44.387907198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:3,}" May 15 01:18:44.388156 containerd[2704]: time="2025-05-15T01:18:44.388138075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9rx,Uid:afd4a0f0-7568-4656-98b5-f7eb614f688a,Namespace:calico-system,Attempt:1,}" May 15 01:18:44.388567 kubelet[4234]: I0515 01:18:44.388549 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0" May 15 01:18:44.388845 systemd[1]: run-netns-cni\x2dfa910cd5\x2db65e\x2deb13\x2d86c3\x2dd079ede93d27.mount: Deactivated successfully. May 15 01:18:44.388924 systemd[1]: run-netns-cni\x2d595f7f95\x2d6a4c\x2d7f78\x2d92d2\x2d4aa34b8e7999.mount: Deactivated successfully. 
May 15 01:18:44.390273 containerd[2704]: time="2025-05-15T01:18:44.390195134Z" level=info msg="StopPodSandbox for \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\"" May 15 01:18:44.390503 containerd[2704]: time="2025-05-15T01:18:44.390473763Z" level=info msg="Ensure that sandbox 4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0 in task-service has been cleanup successfully" May 15 01:18:44.391397 containerd[2704]: time="2025-05-15T01:18:44.391372916Z" level=info msg="TearDown network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" successfully" May 15 01:18:44.391424 containerd[2704]: time="2025-05-15T01:18:44.391397511Z" level=info msg="StopPodSandbox for \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" returns successfully" May 15 01:18:44.391724 containerd[2704]: time="2025-05-15T01:18:44.391704655Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\"" May 15 01:18:44.391798 containerd[2704]: time="2025-05-15T01:18:44.391787199Z" level=info msg="TearDown network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" successfully" May 15 01:18:44.391824 containerd[2704]: time="2025-05-15T01:18:44.391797997Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" returns successfully" May 15 01:18:44.392069 containerd[2704]: time="2025-05-15T01:18:44.392048111Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" May 15 01:18:44.392145 containerd[2704]: time="2025-05-15T01:18:44.392134055Z" level=info msg="TearDown network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" successfully" May 15 01:18:44.392168 containerd[2704]: time="2025-05-15T01:18:44.392145653Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" returns successfully" May 15 01:18:44.392513 containerd[2704]: time="2025-05-15T01:18:44.392495948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:3,}" May 15 01:18:44.444659 containerd[2704]: time="2025-05-15T01:18:44.444603654Z" level=error msg="Failed to destroy network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.445866 containerd[2704]: time="2025-05-15T01:18:44.444955269Z" level=error msg="encountered an error cleaning up failed sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.445866 containerd[2704]: time="2025-05-15T01:18:44.444996101Z" level=error msg="Failed to destroy network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.445866 containerd[2704]: 
time="2025-05-15T01:18:44.445014698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.445866 containerd[2704]: time="2025-05-15T01:18:44.445277649Z" level=error msg="encountered an error cleaning up failed sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.445866 containerd[2704]: time="2025-05-15T01:18:44.445319242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.446079 kubelet[4234]: E0515 01:18:44.445287 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.446079 kubelet[4234]: E0515 01:18:44.445351 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:44.446079 kubelet[4234]: E0515 01:18:44.445370 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" May 15 01:18:44.446169 kubelet[4234]: E0515 01:18:44.445412 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-cmr4c_calico-apiserver(5acd0bbb-0e0f-417e-9812-ec35b6c162b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" podUID="5acd0bbb-0e0f-417e-9812-ec35b6c162b3" May 15 01:18:44.446169 kubelet[4234]: E0515 01:18:44.445430 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.446169 kubelet[4234]: E0515 01:18:44.445479 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:44.446245 kubelet[4234]: E0515 01:18:44.445496 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df45848cb-96g89" May 15 01:18:44.446245 kubelet[4234]: E0515 01:18:44.445529 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-df45848cb-96g89_calico-system(a3063761-b690-49aa-b309-e4b3ccc5c48d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df45848cb-96g89" podUID="a3063761-b690-49aa-b309-e4b3ccc5c48d" May 15 01:18:44.449065 containerd[2704]: time="2025-05-15T01:18:44.449022596Z" level=error msg="Failed to destroy network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.449396 containerd[2704]: time="2025-05-15T01:18:44.449366332Z" level=error msg="encountered an error cleaning up failed sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.449436 containerd[2704]: time="2025-05-15T01:18:44.449419002Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.449596 kubelet[4234]: E0515 01:18:44.449567 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.449655 kubelet[4234]: E0515 01:18:44.449611 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:44.449655 kubelet[4234]: E0515 01:18:44.449630 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msqz5" May 15 01:18:44.449705 kubelet[4234]: E0515 01:18:44.449660 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-msqz5_kube-system(b65fead4-b7f0-4596-810d-a44b901458bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-msqz5" podUID="b65fead4-b7f0-4596-810d-a44b901458bf" May 15 01:18:44.451707 containerd[2704]: time="2025-05-15T01:18:44.451674424Z" level=error msg="Failed to destroy network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452034 containerd[2704]: time="2025-05-15T01:18:44.452013001Z" level=error msg="encountered an error cleaning up failed sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452100 containerd[2704]: 
time="2025-05-15T01:18:44.452071671Z" level=error msg="Failed to destroy network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452154 containerd[2704]: time="2025-05-15T01:18:44.452071071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452267 kubelet[4234]: E0515 01:18:44.452247 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452299 kubelet[4234]: E0515 01:18:44.452280 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:44.452299 kubelet[4234]: E0515 01:18:44.452294 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" May 15 01:18:44.452345 kubelet[4234]: E0515 01:18:44.452324 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cbd4f5f8-4mgvv_calico-apiserver(af3de368-e3ec-4d89-8716-a98767f6e3d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" podUID="af3de368-e3ec-4d89-8716-a98767f6e3d9" May 15 01:18:44.452436 containerd[2704]: time="2025-05-15T01:18:44.452415207Z" level=error msg="encountered an error cleaning up failed sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452479 containerd[2704]: time="2025-05-15T01:18:44.452464678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9rx,Uid:afd4a0f0-7568-4656-98b5-f7eb614f688a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452516 containerd[2704]: time="2025-05-15T01:18:44.452484594Z" level=error msg="Failed to destroy network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452608 kubelet[4234]: E0515 01:18:44.452584 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452635 kubelet[4234]: E0515 01:18:44.452623 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:44.452657 kubelet[4234]: E0515 01:18:44.452640 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm9rx" May 15 01:18:44.452690 kubelet[4234]: E0515 01:18:44.452670 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jm9rx_calico-system(afd4a0f0-7568-4656-98b5-f7eb614f688a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jm9rx_calico-system(afd4a0f0-7568-4656-98b5-f7eb614f688a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jm9rx" podUID="afd4a0f0-7568-4656-98b5-f7eb614f688a" May 15 01:18:44.452789 containerd[2704]: time="2025-05-15T01:18:44.452767982Z" level=error msg="encountered an error cleaning up failed sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452829 containerd[2704]: time="2025-05-15T01:18:44.452813853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452963 kubelet[4234]: E0515 01:18:44.452941 4234 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:18:44.452987 kubelet[4234]: E0515 01:18:44.452977 4234 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:44.453008 kubelet[4234]: E0515 01:18:44.452992 4234 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9rxxw" May 15 01:18:44.453046 kubelet[4234]: E0515 01:18:44.453029 4234 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9rxxw_kube-system(65547478-e6ce-4256-b233-a32124af49fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9rxxw" podUID="65547478-e6ce-4256-b233-a32124af49fe" May 15 01:18:45.002063 systemd[1]: run-netns-cni\x2d4ec4c3ae\x2d29f1\x2dc4e6\x2d384a\x2d078dfc946d9d.mount: Deactivated successfully. May 15 01:18:45.116416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506049799.mount: Deactivated successfully. 
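Every sandbox failure in the block above has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container only writes once it is up and has /var/lib/calico/ mounted. A minimal stand-alone sketch of that check (illustrative Go, not Calico's actual plugin code):

```go
// A sketch of the check behind the repeated "stat /var/lib/calico/nodename"
// failures above: until calico/node writes this file, every CNI ADD/DEL on the
// host fails with the error seen in the kubelet/containerd log.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	switch {
	case os.IsNotExist(err):
		// The condition the plugin reports back to kubelet/containerd above.
		fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
		os.Exit(1)
	case err != nil:
		fmt.Fprintln(os.Stderr, "unexpected error:", err)
		os.Exit(1)
	default:
		fmt.Println("calico/node has registered this host as:", strings.TrimSpace(string(data)))
	}
}
```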
May 15 01:18:45.136939 containerd[2704]: time="2025-05-15T01:18:45.136868257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 01:18:45.136984 containerd[2704]: time="2025-05-15T01:18:45.136873376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:45.137652 containerd[2704]: time="2025-05-15T01:18:45.137633004Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:45.139298 containerd[2704]: time="2025-05-15T01:18:45.139279358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:45.139876 containerd[2704]: time="2025-05-15T01:18:45.139843340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.772734785s" May 15 01:18:45.139909 containerd[2704]: time="2025-05-15T01:18:45.139879894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 01:18:45.145290 containerd[2704]: time="2025-05-15T01:18:45.145264159Z" level=info msg="CreateContainer within sandbox \"d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 01:18:45.153390 containerd[2704]: time="2025-05-15T01:18:45.153325798Z" level=info msg="CreateContainer within sandbox \"d22a6c4ebfe9eceb0f4c5e9db083a209bb013a6642a4fe964e300d40da81feb1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8fccf0f47f653186f6d7dc0f1a85339b42be2218076c7b7f7902bf4309c9a311\"" May 15 01:18:45.153737 containerd[2704]: time="2025-05-15T01:18:45.153718370Z" level=info msg="StartContainer for \"8fccf0f47f653186f6d7dc0f1a85339b42be2218076c7b7f7902bf4309c9a311\"" May 15 01:18:45.179970 systemd[1]: Started cri-containerd-8fccf0f47f653186f6d7dc0f1a85339b42be2218076c7b7f7902bf4309c9a311.scope - libcontainer container 8fccf0f47f653186f6d7dc0f1a85339b42be2218076c7b7f7902bf4309c9a311. May 15 01:18:45.201887 containerd[2704]: time="2025-05-15T01:18:45.201849411Z" level=info msg="StartContainer for \"8fccf0f47f653186f6d7dc0f1a85339b42be2218076c7b7f7902bf4309c9a311\" returns successfully" May 15 01:18:45.310329 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 01:18:45.310444 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
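At this point the pull of ghcr.io/flatcar/calico/node:v3.29.3 completes and containerd creates and starts container 8fccf0f4… inside sandbox d22a6c4e…; the WireGuard module load that follows backs Calico's optional WireGuard encryption. In the cluster this sequence is driven by kubelet over CRI; the sketch below shows the equivalent pull/create/start against containerd's native Go client (hedged: the container ID, snapshot name and missing mounts/privileges are simplifications, not what kubelet actually issues):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket the kubelet uses; "k8s.io" is the
	// namespace CRI-managed images and containers live in.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image reported in the log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container and a task from it, then start the task — the rough
	// equivalent of the CreateContainer/StartContainer lines above.
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started container", container.ID())
}
```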
May 15 01:18:45.391719 kubelet[4234]: I0515 01:18:45.391683 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e" May 15 01:18:45.392150 containerd[2704]: time="2025-05-15T01:18:45.392123803Z" level=info msg="StopPodSandbox for \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\"" May 15 01:18:45.392329 containerd[2704]: time="2025-05-15T01:18:45.392278456Z" level=info msg="Ensure that sandbox 9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e in task-service has been cleanup successfully" May 15 01:18:45.392453 containerd[2704]: time="2025-05-15T01:18:45.392439748Z" level=info msg="TearDown network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\" successfully" May 15 01:18:45.392453 containerd[2704]: time="2025-05-15T01:18:45.392452186Z" level=info msg="StopPodSandbox for \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\" returns successfully" May 15 01:18:45.392646 containerd[2704]: time="2025-05-15T01:18:45.392631995Z" level=info msg="StopPodSandbox for \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\"" May 15 01:18:45.392715 containerd[2704]: time="2025-05-15T01:18:45.392705182Z" level=info msg="TearDown network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" successfully" May 15 01:18:45.392739 containerd[2704]: time="2025-05-15T01:18:45.392716340Z" level=info msg="StopPodSandbox for \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" returns successfully" May 15 01:18:45.392892 containerd[2704]: time="2025-05-15T01:18:45.392874553Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\"" May 15 01:18:45.392970 containerd[2704]: time="2025-05-15T01:18:45.392955978Z" level=info msg="TearDown network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" successfully" May 15 01:18:45.392998 containerd[2704]: time="2025-05-15T01:18:45.392970336Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" returns successfully" May 15 01:18:45.393183 containerd[2704]: time="2025-05-15T01:18:45.393164022Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" May 15 01:18:45.393261 containerd[2704]: time="2025-05-15T01:18:45.393249567Z" level=info msg="TearDown network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" successfully" May 15 01:18:45.393286 containerd[2704]: time="2025-05-15T01:18:45.393262125Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" returns successfully" May 15 01:18:45.393609 containerd[2704]: time="2025-05-15T01:18:45.393590548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:4,}" May 15 01:18:45.394566 kubelet[4234]: I0515 01:18:45.394550 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61" May 15 01:18:45.394879 containerd[2704]: time="2025-05-15T01:18:45.394863047Z" level=info msg="StopPodSandbox for \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\"" May 15 01:18:45.395035 containerd[2704]: 
time="2025-05-15T01:18:45.395022739Z" level=info msg="Ensure that sandbox e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61 in task-service has been cleanup successfully" May 15 01:18:45.395362 containerd[2704]: time="2025-05-15T01:18:45.395346883Z" level=info msg="TearDown network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\" successfully" May 15 01:18:45.395383 containerd[2704]: time="2025-05-15T01:18:45.395361601Z" level=info msg="StopPodSandbox for \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\" returns successfully" May 15 01:18:45.395699 containerd[2704]: time="2025-05-15T01:18:45.395684824Z" level=info msg="StopPodSandbox for \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\"" May 15 01:18:45.395799 containerd[2704]: time="2025-05-15T01:18:45.395779888Z" level=info msg="TearDown network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" successfully" May 15 01:18:45.395821 containerd[2704]: time="2025-05-15T01:18:45.395799884Z" level=info msg="StopPodSandbox for \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" returns successfully" May 15 01:18:45.396026 containerd[2704]: time="2025-05-15T01:18:45.396009688Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\"" May 15 01:18:45.396097 containerd[2704]: time="2025-05-15T01:18:45.396087035Z" level=info msg="TearDown network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" successfully" May 15 01:18:45.396118 containerd[2704]: time="2025-05-15T01:18:45.396098313Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" returns successfully" May 15 01:18:45.396253 containerd[2704]: time="2025-05-15T01:18:45.396239568Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" May 15 01:18:45.396321 containerd[2704]: time="2025-05-15T01:18:45.396303757Z" level=info msg="TearDown network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" successfully" May 15 01:18:45.396321 containerd[2704]: time="2025-05-15T01:18:45.396313075Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" returns successfully" May 15 01:18:45.396364 kubelet[4234]: I0515 01:18:45.396320 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf" May 15 01:18:45.396600 containerd[2704]: time="2025-05-15T01:18:45.396585548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:4,}" May 15 01:18:45.396894 containerd[2704]: time="2025-05-15T01:18:45.396879617Z" level=info msg="StopPodSandbox for \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\"" May 15 01:18:45.397015 containerd[2704]: time="2025-05-15T01:18:45.397003755Z" level=info msg="Ensure that sandbox 2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf in task-service has been cleanup successfully" May 15 01:18:45.397160 containerd[2704]: time="2025-05-15T01:18:45.397147810Z" level=info msg="TearDown network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\" successfully" May 15 01:18:45.397183 containerd[2704]: time="2025-05-15T01:18:45.397160568Z" 
level=info msg="StopPodSandbox for \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\" returns successfully" May 15 01:18:45.397347 containerd[2704]: time="2025-05-15T01:18:45.397333218Z" level=info msg="StopPodSandbox for \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\"" May 15 01:18:45.397412 containerd[2704]: time="2025-05-15T01:18:45.397402366Z" level=info msg="TearDown network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" successfully" May 15 01:18:45.397433 containerd[2704]: time="2025-05-15T01:18:45.397412644Z" level=info msg="StopPodSandbox for \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" returns successfully" May 15 01:18:45.397590 containerd[2704]: time="2025-05-15T01:18:45.397574296Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\"" May 15 01:18:45.397659 containerd[2704]: time="2025-05-15T01:18:45.397649003Z" level=info msg="TearDown network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" successfully" May 15 01:18:45.397685 containerd[2704]: time="2025-05-15T01:18:45.397659881Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" returns successfully" May 15 01:18:45.397856 containerd[2704]: time="2025-05-15T01:18:45.397835651Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" May 15 01:18:45.397924 containerd[2704]: time="2025-05-15T01:18:45.397914517Z" level=info msg="TearDown network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" successfully" May 15 01:18:45.397945 containerd[2704]: time="2025-05-15T01:18:45.397925115Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" returns successfully" May 15 01:18:45.398129 kubelet[4234]: I0515 01:18:45.398112 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1" May 15 01:18:45.398245 containerd[2704]: time="2025-05-15T01:18:45.398228903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:4,}" May 15 01:18:45.398508 containerd[2704]: time="2025-05-15T01:18:45.398486498Z" level=info msg="StopPodSandbox for \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\"" May 15 01:18:45.398654 containerd[2704]: time="2025-05-15T01:18:45.398642071Z" level=info msg="Ensure that sandbox d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1 in task-service has been cleanup successfully" May 15 01:18:45.398827 containerd[2704]: time="2025-05-15T01:18:45.398813601Z" level=info msg="TearDown network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\" successfully" May 15 01:18:45.398849 containerd[2704]: time="2025-05-15T01:18:45.398828358Z" level=info msg="StopPodSandbox for \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\" returns successfully" May 15 01:18:45.399079 containerd[2704]: time="2025-05-15T01:18:45.399059638Z" level=info msg="StopPodSandbox for \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\"" May 15 01:18:45.399159 containerd[2704]: time="2025-05-15T01:18:45.399147583Z" level=info msg="TearDown network for sandbox 
\"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" successfully" May 15 01:18:45.399191 containerd[2704]: time="2025-05-15T01:18:45.399160141Z" level=info msg="StopPodSandbox for \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" returns successfully" May 15 01:18:45.399215 kubelet[4234]: I0515 01:18:45.399181 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3" May 15 01:18:45.399393 containerd[2704]: time="2025-05-15T01:18:45.399371064Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\"" May 15 01:18:45.399485 containerd[2704]: time="2025-05-15T01:18:45.399473326Z" level=info msg="TearDown network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" successfully" May 15 01:18:45.399506 containerd[2704]: time="2025-05-15T01:18:45.399485644Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" returns successfully" May 15 01:18:45.399849 containerd[2704]: time="2025-05-15T01:18:45.399829625Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" May 15 01:18:45.399882 containerd[2704]: time="2025-05-15T01:18:45.399850901Z" level=info msg="StopPodSandbox for \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\"" May 15 01:18:45.399927 containerd[2704]: time="2025-05-15T01:18:45.399916050Z" level=info msg="TearDown network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" successfully" May 15 01:18:45.399949 containerd[2704]: time="2025-05-15T01:18:45.399927008Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" returns successfully" May 15 01:18:45.400010 containerd[2704]: time="2025-05-15T01:18:45.399996156Z" level=info msg="Ensure that sandbox 83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3 in task-service has been cleanup successfully" May 15 01:18:45.400229 containerd[2704]: time="2025-05-15T01:18:45.400214758Z" level=info msg="TearDown network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\" successfully" May 15 01:18:45.400250 containerd[2704]: time="2025-05-15T01:18:45.400229235Z" level=info msg="StopPodSandbox for \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\" returns successfully" May 15 01:18:45.400279 containerd[2704]: time="2025-05-15T01:18:45.400261909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:4,}" May 15 01:18:45.400427 containerd[2704]: time="2025-05-15T01:18:45.400413123Z" level=info msg="StopPodSandbox for \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\"" May 15 01:18:45.400491 containerd[2704]: time="2025-05-15T01:18:45.400481151Z" level=info msg="TearDown network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" successfully" May 15 01:18:45.400511 containerd[2704]: time="2025-05-15T01:18:45.400491390Z" level=info msg="StopPodSandbox for \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" returns successfully" May 15 01:18:45.400850 containerd[2704]: time="2025-05-15T01:18:45.400832250Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-jm9rx,Uid:afd4a0f0-7568-4656-98b5-f7eb614f688a,Namespace:calico-system,Attempt:2,}" May 15 01:18:45.400962 kubelet[4234]: I0515 01:18:45.400948 4234 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b" May 15 01:18:45.401307 containerd[2704]: time="2025-05-15T01:18:45.401289691Z" level=info msg="StopPodSandbox for \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\"" May 15 01:18:45.401471 containerd[2704]: time="2025-05-15T01:18:45.401458382Z" level=info msg="Ensure that sandbox 6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b in task-service has been cleanup successfully" May 15 01:18:45.401654 containerd[2704]: time="2025-05-15T01:18:45.401640590Z" level=info msg="TearDown network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\" successfully" May 15 01:18:45.401675 containerd[2704]: time="2025-05-15T01:18:45.401654908Z" level=info msg="StopPodSandbox for \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\" returns successfully" May 15 01:18:45.401890 containerd[2704]: time="2025-05-15T01:18:45.401870430Z" level=info msg="StopPodSandbox for \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\"" May 15 01:18:45.401966 containerd[2704]: time="2025-05-15T01:18:45.401954695Z" level=info msg="TearDown network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" successfully" May 15 01:18:45.401988 containerd[2704]: time="2025-05-15T01:18:45.401966293Z" level=info msg="StopPodSandbox for \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" returns successfully" May 15 01:18:45.402164 containerd[2704]: time="2025-05-15T01:18:45.402146142Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\"" May 15 01:18:45.402239 containerd[2704]: time="2025-05-15T01:18:45.402228408Z" level=info msg="TearDown network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" successfully" May 15 01:18:45.402260 containerd[2704]: time="2025-05-15T01:18:45.402239006Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" returns successfully" May 15 01:18:45.402487 containerd[2704]: time="2025-05-15T01:18:45.402466087Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" May 15 01:18:45.402570 containerd[2704]: time="2025-05-15T01:18:45.402557751Z" level=info msg="TearDown network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" successfully" May 15 01:18:45.402595 containerd[2704]: time="2025-05-15T01:18:45.402570628Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" returns successfully" May 15 01:18:45.402978 containerd[2704]: time="2025-05-15T01:18:45.402954362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:4,}" May 15 01:18:45.404309 kubelet[4234]: I0515 01:18:45.404181 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qkcpw" podStartSLOduration=1.471008592 podStartE2EDuration="8.404153993s" podCreationTimestamp="2025-05-15 01:18:37 +0000 UTC" firstStartedPulling="2025-05-15 01:18:38.207305034 +0000 
UTC m=+11.944828680" lastFinishedPulling="2025-05-15 01:18:45.140450395 +0000 UTC m=+18.877974081" observedRunningTime="2025-05-15 01:18:45.403203878 +0000 UTC m=+19.140727564" watchObservedRunningTime="2025-05-15 01:18:45.404153993 +0000 UTC m=+19.141677639" May 15 01:18:45.557154 systemd-networkd[2613]: cali397e3dbc3a2: Link UP May 15 01:18:45.557322 systemd-networkd[2613]: cali397e3dbc3a2: Gained carrier May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.417 [INFO][6664] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.440 [INFO][6664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0 calico-apiserver-65cbd4f5f8- calico-apiserver af3de368-e3ec-4d89-8716-a98767f6e3d9 650 0 2025-05-15 01:18:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65cbd4f5f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-c9ea1a9895 calico-apiserver-65cbd4f5f8-4mgvv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali397e3dbc3a2 [] []}} ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.440 [INFO][6664] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.476 [INFO][6821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" HandleID="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" HandleID="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400019de00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-c9ea1a9895", "pod":"calico-apiserver-65cbd4f5f8-4mgvv", "timestamp":"2025-05-15 01:18:45.476261949 +0000 UTC"}, Hostname:"ci-4230.1.1-n-c9ea1a9895", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
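The "RunPodSandbox for &PodSandboxMetadata{…,Attempt:4,}" entries above are kubelet re-issuing the CRI RunPodSandbox call after each failed attempt, bumping the Attempt counter and tearing down the sandbox IDs that never got networking. A rough sketch of that call using the published CRI types (socket path assumed, pod name/UID copied from the log, most of the sandbox config omitted — this is not kubelet's code):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint the way kubelet/crictl do.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-apiserver-65cbd4f5f8-4mgvv",
				Uid:       "af3de368-e3ec-4d89-8716-a98767f6e3d9",
				Namespace: "calico-apiserver",
				Attempt:   4, // kubelet bumps this on every retry, as seen above
			},
		},
	})
	if err != nil {
		// With calico/node still down, this is where the "failed to setup
		// network for sandbox" RPC error in the log comes back.
		log.Fatal(err)
	}
	log.Println("sandbox created:", resp.PodSandboxId)
}
```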
May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6821] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-c9ea1a9895' May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.505 [INFO][6821] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.508 [INFO][6821] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.510 [INFO][6821] ipam/ipam.go 521: Ran out of existing affine blocks for host host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.512 [INFO][6821] ipam/ipam.go 538: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.513 [INFO][6821] ipam/ipam_block_reader_writer.go 154: Found free block: 192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.513 [INFO][6821] ipam/ipam.go 550: Found unclaimed block host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.513 [INFO][6821] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.516 [INFO][6821] ipam/ipam_block_reader_writer.go 182: Block affinity already exists, getting existing affinity host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.517 [INFO][6821] ipam/ipam_block_reader_writer.go 190: Got existing affinity host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.517 [INFO][6821] ipam/ipam_block_reader_writer.go 194: Marking existing affinity with current state pending as pending host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.519 [INFO][6821] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.520 [INFO][6821] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.522 [INFO][6821] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.523 [INFO][6821] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.523 [INFO][6821] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.526 [INFO][6821] ipam/ipam_block_reader_writer.go 228: The block already exists, getting it from data store host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.528 [INFO][6821] ipam/ipam_block_reader_writer.go 244: Block is already claimed by this host, confirm the affinity host="ci-4230.1.1-n-c9ea1a9895" 
subnet=192.168.12.64/26 May 15 01:18:45.564748 containerd[2704]: 2025-05-15 01:18:45.528 [INFO][6821] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.529 [ERROR][6821] ipam/customresource.go 183: Error updating resource Key=BlockAffinity(ci-4230.1.1-n-c9ea1a9895-192-168-12-64-26) Name="ci-4230.1.1-n-c9ea1a9895-192-168-12-64-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"ci-4230.1.1-n-c9ea1a9895-192-168-12-64-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"ci-4230.1.1-n-c9ea1a9895", CIDR:"192.168.12.64/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "ci-4230.1.1-n-c9ea1a9895-192-168-12-64-26": the object has been modified; please apply your changes to the latest version and try again May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.531 [INFO][6821] ipam/ipam_block_reader_writer.go 284: Affinity is already confirmed host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.531 [INFO][6821] ipam/ipam.go 585: Block '192.168.12.64/26' has 64 free ips which is more than 1 ips required. host="ci-4230.1.1-n-c9ea1a9895" subnet=192.168.12.64/26 May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.531 [INFO][6821] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.532 [INFO][6821] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4 May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.534 [INFO][6821] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.536 [ERROR][6821] ipam/customresource.go 183: Error updating resource Key=IPAMBlock(192-168-12-64-26) Name="192-168-12-64-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-12-64-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.12.64/26", Affinity:(*string)(0x40006a8240), Allocations:[]*int{(*int)(0x40006170a8), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0x400019de00), AttrSecondary:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-c9ea1a9895", "pod":"calico-apiserver-65cbd4f5f8-4mgvv", "timestamp":"2025-05-15 01:18:45.476261949 +0000 UTC"}}}, SequenceNumber:0x183f8e8f0ea6ca9a, SequenceNumberForAllocation:map[string]uint64{"0":0x183f8e8f0ea6ca99}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-12-64-26": the object has been modified; please apply your changes to the latest version and try again May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.536 [INFO][6821] ipam/ipam.go 1207: Failed to update block block=192.168.12.64/26 error=update conflict: IPAMBlock(192-168-12-64-26) handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.544 [INFO][6821] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.545 [INFO][6821] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4 May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.547 [INFO][6821] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.550 [INFO][6821] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.65/26] block=192.168.12.64/26 handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.565338 containerd[2704]: 2025-05-15 01:18:45.550 [INFO][6821] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.65/26] handle="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.550 [INFO][6821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
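The two [ERROR] entries in this IPAM trace are not fatal: the BlockAffinity and IPAMBlock updates each hit the datastore's optimistic-concurrency check ("the object has been modified; please apply your changes to the latest version and try again") because another allocation raced on the same 192.168.12.64/26 block, and ipam.go simply re-reads and retries until the claim for 192.168.12.65 sticks. A toy version of that read-modify-compare-and-swap-retry loop (generic Go, not Calico's implementation):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ErrConflict mimics "the object has been modified; please apply your changes
// to the latest version and try again".
var ErrConflict = errors.New("update conflict: object has been modified")

// block is a toy IPAMBlock: one allocation slot per address in a /26 plus a
// resource version used for compare-and-swap style updates.
type block struct {
	mu          sync.Mutex
	version     int
	allocations [64]bool
}

// read returns a snapshot of the allocations together with the version seen.
func (b *block) read() (int, [64]bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.version, b.allocations
}

// update succeeds only if the caller saw the latest version; otherwise it
// fails the same way the datastore does in the trace above.
func (b *block) update(sawVersion int, alloc [64]bool) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	if sawVersion != b.version {
		return ErrConflict
	}
	b.allocations = alloc
	b.version++
	return nil
}

// claimOne retries the whole read-modify-write on conflict, like ipam.go does
// after "Failed to update block ... update conflict".
func claimOne(b *block) (int, error) {
	for attempt := 0; attempt < 10; attempt++ {
		version, alloc := b.read()
		for ordinal := range alloc {
			if alloc[ordinal] {
				continue // already allocated
			}
			alloc[ordinal] = true
			err := b.update(version, alloc)
			if errors.Is(err, ErrConflict) {
				break // someone else won the write; re-read and try again
			}
			if err != nil {
				return 0, err
			}
			return ordinal, nil
		}
	}
	return 0, errors.New("no free address or too many conflicts")
}

func main() {
	var b block
	// Two claims against the same /26, like the two pods being networked above;
	// in the log the winning claims end up as 192.168.12.65 and 192.168.12.66.
	first, _ := claimOne(&b)
	second, _ := claimOne(&b)
	fmt.Println("claimed ordinals", first, "and", second, "in block 192.168.12.64/26")
}
```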
May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.550 [INFO][6821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.65/26] IPv6=[] ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" HandleID="k8s-pod-network.228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.551 [INFO][6664] cni-plugin/k8s.go 386: Populated endpoint ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0", GenerateName:"calico-apiserver-65cbd4f5f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"af3de368-e3ec-4d89-8716-a98767f6e3d9", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65cbd4f5f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"", Pod:"calico-apiserver-65cbd4f5f8-4mgvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali397e3dbc3a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.552 [INFO][6664] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.65/32] ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.552 [INFO][6664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali397e3dbc3a2 ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.557 [INFO][6664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.557 [INFO][6664] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0", GenerateName:"calico-apiserver-65cbd4f5f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"af3de368-e3ec-4d89-8716-a98767f6e3d9", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65cbd4f5f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4", Pod:"calico-apiserver-65cbd4f5f8-4mgvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali397e3dbc3a2", MAC:"ce:f5:c8:5f:09:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.565637 containerd[2704]: 2025-05-15 01:18:45.563 [INFO][6664] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-4mgvv" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--4mgvv-eth0" May 15 01:18:45.579004 containerd[2704]: time="2025-05-15T01:18:45.578937636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:45.579004 containerd[2704]: time="2025-05-15T01:18:45.578985748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:45.579004 containerd[2704]: time="2025-05-15T01:18:45.578996346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.579110 containerd[2704]: time="2025-05-15T01:18:45.579062735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.607087 systemd[1]: Started cri-containerd-228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4.scope - libcontainer container 228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4. 
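The WorkloadEndpoint written to the datastore here records the assigned address as IPNetworks:["192.168.12.65/32"], even though it was carved out of the /26 block that the earlier trace reported as holding 64 addresses. A short stdlib check of both facts (illustrative only):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// 192.168.12.65 was assigned out of block 192.168.12.64/26 (64 addresses),
	// but the WorkloadEndpoint stores it as a single-address /32.
	block := netip.MustParsePrefix("192.168.12.64/26")
	addr := netip.MustParseAddr("192.168.12.65")

	fmt.Println("addresses in block:", 1<<(32-block.Bits()))               // 64
	fmt.Println("address is inside block:", block.Contains(addr))         // true
	fmt.Println("endpoint entry:", netip.PrefixFrom(addr, addr.BitLen())) // 192.168.12.65/32
}
```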
May 15 01:18:45.618523 systemd-networkd[2613]: calib3f4012131d: Link UP May 15 01:18:45.618741 systemd-networkd[2613]: calib3f4012131d: Gained carrier May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.437 [INFO][6730] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.447 [INFO][6730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0 csi-node-driver- calico-system afd4a0f0-7568-4656-98b5-f7eb614f688a 578 0 2025-05-15 01:18:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230.1.1-n-c9ea1a9895 csi-node-driver-jm9rx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib3f4012131d [] []}} ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.447 [INFO][6730] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.476 [INFO][6843] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" HandleID="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6843] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" HandleID="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000388c30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-c9ea1a9895", "pod":"csi-node-driver-jm9rx", "timestamp":"2025-05-15 01:18:45.476267548 +0000 UTC"}, Hostname:"ci-4230.1.1-n-c9ea1a9895", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.550 [INFO][6843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.550 [INFO][6843] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-c9ea1a9895' May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.551 [INFO][6843] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.602 [INFO][6843] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.606 [INFO][6843] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.607 [INFO][6843] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.609 [INFO][6843] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.609 [INFO][6843] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.610 [INFO][6843] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5 May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.612 [INFO][6843] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.616 [INFO][6843] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.66/26] block=192.168.12.64/26 handle="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.616 [INFO][6843] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.66/26] handle="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.616 [INFO][6843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
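Each of the two assignments is bracketed by "About to acquire host-wide IPAM lock" … "Released host-wide IPAM lock", which is what keeps the concurrent setups of calico-apiserver-65cbd4f5f8-4mgvv and csi-node-driver-jm9rx from handing out the same address on this node; the second assignment then reuses the already-confirmed 192.168.12.64/26 affinity and gets 192.168.12.66. A toy illustration of that serialization with a plain mutex (Calico's real lock is a host-wide lock shared by separate CNI invocations, not an in-process mutex):

```go
package main

import (
	"fmt"
	"sync"
)

// allocator hands out ordinals from a shared block; the mutex plays the role
// of the "host-wide IPAM lock" bracketing each assignment in the trace.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign(pod string) int {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ord := a.next
	a.next++
	fmt.Printf("%s -> ordinal %d in 192.168.12.64/26\n", pod, ord)
	return ord
}

func main() {
	var (
		a  allocator
		wg sync.WaitGroup
	)
	for _, pod := range []string{"calico-apiserver-65cbd4f5f8-4mgvv", "csi-node-driver-jm9rx"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			a.assign(p)
		}(pod)
	}
	wg.Wait()
}
```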
May 15 01:18:45.625819 containerd[2704]: 2025-05-15 01:18:45.616 [INFO][6843] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.66/26] IPv6=[] ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" HandleID="k8s-pod-network.af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" May 15 01:18:45.626336 containerd[2704]: 2025-05-15 01:18:45.617 [INFO][6730] cni-plugin/k8s.go 386: Populated endpoint ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afd4a0f0-7568-4656-98b5-f7eb614f688a", ResourceVersion:"578", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"", Pod:"csi-node-driver-jm9rx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3f4012131d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.626336 containerd[2704]: 2025-05-15 01:18:45.617 [INFO][6730] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.66/32] ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" May 15 01:18:45.626336 containerd[2704]: 2025-05-15 01:18:45.617 [INFO][6730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3f4012131d ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" May 15 01:18:45.626336 containerd[2704]: 2025-05-15 01:18:45.618 [INFO][6730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" May 15 01:18:45.626336 containerd[2704]: 2025-05-15 01:18:45.618 [INFO][6730] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" 
Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afd4a0f0-7568-4656-98b5-f7eb614f688a", ResourceVersion:"578", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5", Pod:"csi-node-driver-jm9rx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3f4012131d", MAC:"6e:c6:5b:e9:ce:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.626336 containerd[2704]: 2025-05-15 01:18:45.624 [INFO][6730] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5" Namespace="calico-system" Pod="csi-node-driver-jm9rx" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-csi--node--driver--jm9rx-eth0" May 15 01:18:45.631137 containerd[2704]: time="2025-05-15T01:18:45.631109735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-4mgvv,Uid:af3de368-e3ec-4d89-8716-a98767f6e3d9,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4\"" May 15 01:18:45.632276 containerd[2704]: time="2025-05-15T01:18:45.632253936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 01:18:45.639108 containerd[2704]: time="2025-05-15T01:18:45.639036318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:45.639132 containerd[2704]: time="2025-05-15T01:18:45.639113825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:45.639152 containerd[2704]: time="2025-05-15T01:18:45.639126782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.639225 containerd[2704]: time="2025-05-15T01:18:45.639207928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.662984 systemd[1]: Started cri-containerd-af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5.scope - libcontainer container af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5. May 15 01:18:45.679156 containerd[2704]: time="2025-05-15T01:18:45.679120796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9rx,Uid:afd4a0f0-7568-4656-98b5-f7eb614f688a,Namespace:calico-system,Attempt:2,} returns sandbox id \"af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5\"" May 15 01:18:45.723852 systemd-networkd[2613]: cali15910d5f8e3: Link UP May 15 01:18:45.724236 systemd-networkd[2613]: cali15910d5f8e3: Gained carrier May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.439 [INFO][6751] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.451 [INFO][6751] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0 calico-apiserver-65cbd4f5f8- calico-apiserver 5acd0bbb-0e0f-417e-9812-ec35b6c162b3 651 0 2025-05-15 01:18:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65cbd4f5f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-c9ea1a9895 calico-apiserver-65cbd4f5f8-cmr4c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali15910d5f8e3 [] []}} ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.451 [INFO][6751] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.476 [INFO][6854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" HandleID="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6854] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" HandleID="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000598ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-c9ea1a9895", "pod":"calico-apiserver-65cbd4f5f8-cmr4c", "timestamp":"2025-05-15 01:18:45.476262269 +0000 UTC"}, Hostname:"ci-4230.1.1-n-c9ea1a9895", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.616 [INFO][6854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.616 [INFO][6854] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-c9ea1a9895' May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.653 [INFO][6854] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.703 [INFO][6854] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.706 [INFO][6854] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.711 [INFO][6854] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.713 [INFO][6854] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.713 [INFO][6854] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.714 [INFO][6854] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2 May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.716 [INFO][6854] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.721 [INFO][6854] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.67/26] block=192.168.12.64/26 handle="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.721 [INFO][6854] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.67/26] handle="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.721 [INFO][6854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:18:45.730883 containerd[2704]: 2025-05-15 01:18:45.721 [INFO][6854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.67/26] IPv6=[] ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" HandleID="k8s-pod-network.3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" May 15 01:18:45.731363 containerd[2704]: 2025-05-15 01:18:45.722 [INFO][6751] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0", GenerateName:"calico-apiserver-65cbd4f5f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"5acd0bbb-0e0f-417e-9812-ec35b6c162b3", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65cbd4f5f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"", Pod:"calico-apiserver-65cbd4f5f8-cmr4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15910d5f8e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.731363 containerd[2704]: 2025-05-15 01:18:45.722 [INFO][6751] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.67/32] ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" May 15 01:18:45.731363 containerd[2704]: 2025-05-15 01:18:45.722 [INFO][6751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15910d5f8e3 ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" May 15 01:18:45.731363 containerd[2704]: 2025-05-15 01:18:45.724 [INFO][6751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" May 15 01:18:45.731363 containerd[2704]: 2025-05-15 01:18:45.724 [INFO][6751] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0", GenerateName:"calico-apiserver-65cbd4f5f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"5acd0bbb-0e0f-417e-9812-ec35b6c162b3", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65cbd4f5f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2", Pod:"calico-apiserver-65cbd4f5f8-cmr4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15910d5f8e3", MAC:"36:b5:ed:3c:f8:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.731363 containerd[2704]: 2025-05-15 01:18:45.729 [INFO][6751] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2" Namespace="calico-apiserver" Pod="calico-apiserver-65cbd4f5f8-cmr4c" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--apiserver--65cbd4f5f8--cmr4c-eth0" May 15 01:18:45.744728 containerd[2704]: time="2025-05-15T01:18:45.744627059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:45.744772 containerd[2704]: time="2025-05-15T01:18:45.744724442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:45.744772 containerd[2704]: time="2025-05-15T01:18:45.744736560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.744872 containerd[2704]: time="2025-05-15T01:18:45.744814986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.764051 systemd[1]: Started cri-containerd-3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2.scope - libcontainer container 3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2. 
May 15 01:18:45.786942 containerd[2704]: time="2025-05-15T01:18:45.786914154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cbd4f5f8-cmr4c,Uid:5acd0bbb-0e0f-417e-9812-ec35b6c162b3,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2\"" May 15 01:18:45.820660 systemd-networkd[2613]: calibceaa597969: Link UP May 15 01:18:45.820790 systemd-networkd[2613]: calibceaa597969: Gained carrier May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.433 [INFO][6700] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.446 [INFO][6700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0 coredns-668d6bf9bc- kube-system b65fead4-b7f0-4596-810d-a44b901458bf 648 0 2025-05-15 01:18:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-c9ea1a9895 coredns-668d6bf9bc-msqz5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibceaa597969 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.446 [INFO][6700] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.476 [INFO][6838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" HandleID="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" HandleID="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003889d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-c9ea1a9895", "pod":"coredns-668d6bf9bc-msqz5", "timestamp":"2025-05-15 01:18:45.476265869 +0000 UTC"}, Hostname:"ci-4230.1.1-n-c9ea1a9895", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.721 [INFO][6838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.721 [INFO][6838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-c9ea1a9895' May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.754 [INFO][6838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.803 [INFO][6838] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.808 [INFO][6838] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.809 [INFO][6838] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.811 [INFO][6838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.811 [INFO][6838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.812 [INFO][6838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.814 [INFO][6838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.818 [INFO][6838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.68/26] block=192.168.12.64/26 handle="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.818 [INFO][6838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.68/26] handle="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.818 [INFO][6838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:18:45.835873 containerd[2704]: 2025-05-15 01:18:45.818 [INFO][6838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.68/26] IPv6=[] ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" HandleID="k8s-pod-network.595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" May 15 01:18:45.836293 containerd[2704]: 2025-05-15 01:18:45.819 [INFO][6700] cni-plugin/k8s.go 386: Populated endpoint ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b65fead4-b7f0-4596-810d-a44b901458bf", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"", Pod:"coredns-668d6bf9bc-msqz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibceaa597969", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.836293 containerd[2704]: 2025-05-15 01:18:45.819 [INFO][6700] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.68/32] ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" May 15 01:18:45.836293 containerd[2704]: 2025-05-15 01:18:45.819 [INFO][6700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibceaa597969 ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" May 15 01:18:45.836293 containerd[2704]: 2025-05-15 01:18:45.820 [INFO][6700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" 
WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" May 15 01:18:45.836293 containerd[2704]: 2025-05-15 01:18:45.821 [INFO][6700] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b65fead4-b7f0-4596-810d-a44b901458bf", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a", Pod:"coredns-668d6bf9bc-msqz5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibceaa597969", MAC:"12:36:9d:81:93:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.836293 containerd[2704]: 2025-05-15 01:18:45.834 [INFO][6700] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a" Namespace="kube-system" Pod="coredns-668d6bf9bc-msqz5" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--msqz5-eth0" May 15 01:18:45.849721 containerd[2704]: time="2025-05-15T01:18:45.849653977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:45.849744 containerd[2704]: time="2025-05-15T01:18:45.849713607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:45.849744 containerd[2704]: time="2025-05-15T01:18:45.849723925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.849808 containerd[2704]: time="2025-05-15T01:18:45.849793273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.872039 systemd[1]: Started cri-containerd-595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a.scope - libcontainer container 595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a. May 15 01:18:45.895065 containerd[2704]: time="2025-05-15T01:18:45.895038415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msqz5,Uid:b65fead4-b7f0-4596-810d-a44b901458bf,Namespace:kube-system,Attempt:4,} returns sandbox id \"595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a\"" May 15 01:18:45.897093 containerd[2704]: time="2025-05-15T01:18:45.897071381Z" level=info msg="CreateContainer within sandbox \"595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 01:18:45.902743 containerd[2704]: time="2025-05-15T01:18:45.902714361Z" level=info msg="CreateContainer within sandbox \"595f160b4812e9b7ca38194ba061644290a5e1c5a7405d20ce044f65a486e18a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eabe3f33fb1b7cbf2f4afed287c44b807387ac898f8c47ba384125f1e326eb62\"" May 15 01:18:45.903080 containerd[2704]: time="2025-05-15T01:18:45.903056142Z" level=info msg="StartContainer for \"eabe3f33fb1b7cbf2f4afed287c44b807387ac898f8c47ba384125f1e326eb62\"" May 15 01:18:45.922454 systemd-networkd[2613]: cali24da76d6c0f: Link UP May 15 01:18:45.922619 systemd-networkd[2613]: cali24da76d6c0f: Gained carrier May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.432 [INFO][6722] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.446 [INFO][6722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0 coredns-668d6bf9bc- kube-system 65547478-e6ce-4256-b233-a32124af49fe 645 0 2025-05-15 01:18:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-c9ea1a9895 coredns-668d6bf9bc-9rxxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24da76d6c0f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.446 [INFO][6722] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.476 [INFO][6836] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" HandleID="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6836] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" HandleID="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000785bb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-c9ea1a9895", "pod":"coredns-668d6bf9bc-9rxxw", "timestamp":"2025-05-15 01:18:45.476264989 +0000 UTC"}, Hostname:"ci-4230.1.1-n-c9ea1a9895", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.503 [INFO][6836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.818 [INFO][6836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.818 [INFO][6836] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-c9ea1a9895' May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.854 [INFO][6836] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.903 [INFO][6836] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.908 [INFO][6836] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.909 [INFO][6836] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.911 [INFO][6836] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.911 [INFO][6836] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.912 [INFO][6836] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.915 [INFO][6836] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.919 [INFO][6836] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.69/26] block=192.168.12.64/26 handle="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.919 [INFO][6836] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.69/26] handle="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.919 [INFO][6836] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:18:45.929310 containerd[2704]: 2025-05-15 01:18:45.919 [INFO][6836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.69/26] IPv6=[] ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" HandleID="k8s-pod-network.95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" May 15 01:18:45.929769 containerd[2704]: 2025-05-15 01:18:45.921 [INFO][6722] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"65547478-e6ce-4256-b233-a32124af49fe", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"", Pod:"coredns-668d6bf9bc-9rxxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24da76d6c0f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.929769 containerd[2704]: 2025-05-15 01:18:45.921 [INFO][6722] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.69/32] ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" May 15 01:18:45.929769 containerd[2704]: 2025-05-15 01:18:45.921 [INFO][6722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24da76d6c0f ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" May 15 01:18:45.929769 containerd[2704]: 2025-05-15 01:18:45.922 [INFO][6722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" May 15 01:18:45.929769 containerd[2704]: 2025-05-15 01:18:45.922 [INFO][6722] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"65547478-e6ce-4256-b233-a32124af49fe", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f", Pod:"coredns-668d6bf9bc-9rxxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24da76d6c0f", MAC:"e6:b0:0e:86:0c:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:45.929769 containerd[2704]: 2025-05-15 01:18:45.928 [INFO][6722] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9rxxw" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-coredns--668d6bf9bc--9rxxw-eth0" May 15 01:18:45.938985 systemd[1]: Started cri-containerd-eabe3f33fb1b7cbf2f4afed287c44b807387ac898f8c47ba384125f1e326eb62.scope - libcontainer container eabe3f33fb1b7cbf2f4afed287c44b807387ac898f8c47ba384125f1e326eb62. May 15 01:18:45.943483 containerd[2704]: time="2025-05-15T01:18:45.943425291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:45.943483 containerd[2704]: time="2025-05-15T01:18:45.943474482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:45.943565 containerd[2704]: time="2025-05-15T01:18:45.943486360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.943587 containerd[2704]: time="2025-05-15T01:18:45.943558627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:45.953256 systemd[1]: Started cri-containerd-95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f.scope - libcontainer container 95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f. May 15 01:18:45.957282 containerd[2704]: time="2025-05-15T01:18:45.957251529Z" level=info msg="StartContainer for \"eabe3f33fb1b7cbf2f4afed287c44b807387ac898f8c47ba384125f1e326eb62\" returns successfully" May 15 01:18:45.977467 containerd[2704]: time="2025-05-15T01:18:45.977436903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9rxxw,Uid:65547478-e6ce-4256-b233-a32124af49fe,Namespace:kube-system,Attempt:4,} returns sandbox id \"95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f\"" May 15 01:18:45.979477 containerd[2704]: time="2025-05-15T01:18:45.979449074Z" level=info msg="CreateContainer within sandbox \"95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 01:18:45.985720 containerd[2704]: time="2025-05-15T01:18:45.985692669Z" level=info msg="CreateContainer within sandbox \"95e7bd1043363c99e8f43a710b803f5463f91ed9a998aeffd3d02e5cb1b9103f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fb65f008b0c8c90e59e5a8403be7e80a5f64ee0e39051afdf3f912182112016\"" May 15 01:18:45.986074 containerd[2704]: time="2025-05-15T01:18:45.986045928Z" level=info msg="StartContainer for \"6fb65f008b0c8c90e59e5a8403be7e80a5f64ee0e39051afdf3f912182112016\"" May 15 01:18:46.011416 systemd[1]: run-netns-cni\x2d43b952c7\x2dc8f7\x2d2be0\x2d80bd\x2d72940d2e36d3.mount: Deactivated successfully. May 15 01:18:46.011496 systemd[1]: run-netns-cni\x2dafcf21da\x2dd5f1\x2d6c98\x2ddfe0\x2ddaf275d0ce64.mount: Deactivated successfully. May 15 01:18:46.011540 systemd[1]: run-netns-cni\x2df0c9c50c\x2d69b8\x2d009a\x2dd14a\x2d2e2b77355974.mount: Deactivated successfully. May 15 01:18:46.011585 systemd[1]: run-netns-cni\x2d06b8ab11\x2d2c48\x2d6d10\x2d6566\x2dc753abde93d7.mount: Deactivated successfully. May 15 01:18:46.011627 systemd[1]: run-netns-cni\x2d57df1326\x2d1817\x2d659b\x2d7107\x2d1b5031f961e5.mount: Deactivated successfully. May 15 01:18:46.011670 systemd[1]: run-netns-cni\x2db37d54cd\x2df732\x2dcb69\x2d5f80\x2d28faa4de124a.mount: Deactivated successfully. 
May 15 01:18:46.022241 systemd-networkd[2613]: cali974f071940f: Link UP May 15 01:18:46.022633 systemd-networkd[2613]: cali974f071940f: Gained carrier May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.430 [INFO][6674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.441 [INFO][6674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0 calico-kube-controllers-df45848cb- calico-system a3063761-b690-49aa-b309-e4b3ccc5c48d 649 0 2025-05-15 01:18:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:df45848cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230.1.1-n-c9ea1a9895 calico-kube-controllers-df45848cb-96g89 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali974f071940f [] []}} ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.441 [INFO][6674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.476 [INFO][6824] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" HandleID="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.504 [INFO][6824] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" HandleID="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b3dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-c9ea1a9895", "pod":"calico-kube-controllers-df45848cb-96g89", "timestamp":"2025-05-15 01:18:45.476263429 +0000 UTC"}, Hostname:"ci-4230.1.1-n-c9ea1a9895", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.504 [INFO][6824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.919 [INFO][6824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.919 [INFO][6824] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-c9ea1a9895' May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:45.955 [INFO][6824] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.003 [INFO][6824] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.009 [INFO][6824] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.010 [INFO][6824] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.011 [INFO][6824] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.012 [INFO][6824] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.013 [INFO][6824] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644 May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.015 [INFO][6824] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.019 [INFO][6824] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.70/26] block=192.168.12.64/26 handle="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.019 [INFO][6824] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.70/26] handle="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" host="ci-4230.1.1-n-c9ea1a9895" May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.019 [INFO][6824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:18:46.029184 containerd[2704]: 2025-05-15 01:18:46.019 [INFO][6824] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.70/26] IPv6=[] ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" HandleID="k8s-pod-network.c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Workload="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" May 15 01:18:46.029626 containerd[2704]: 2025-05-15 01:18:46.021 [INFO][6674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0", GenerateName:"calico-kube-controllers-df45848cb-", Namespace:"calico-system", SelfLink:"", UID:"a3063761-b690-49aa-b309-e4b3ccc5c48d", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df45848cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"", Pod:"calico-kube-controllers-df45848cb-96g89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali974f071940f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:46.029626 containerd[2704]: 2025-05-15 01:18:46.021 [INFO][6674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.70/32] ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" May 15 01:18:46.029626 containerd[2704]: 2025-05-15 01:18:46.021 [INFO][6674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali974f071940f ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" May 15 01:18:46.029626 containerd[2704]: 2025-05-15 01:18:46.022 [INFO][6674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" May 15 01:18:46.029626 containerd[2704]: 
2025-05-15 01:18:46.022 [INFO][6674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0", GenerateName:"calico-kube-controllers-df45848cb-", Namespace:"calico-system", SelfLink:"", UID:"a3063761-b690-49aa-b309-e4b3ccc5c48d", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 18, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df45848cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-c9ea1a9895", ContainerID:"c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644", Pod:"calico-kube-controllers-df45848cb-96g89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali974f071940f", MAC:"be:c5:7f:89:10:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:18:46.029626 containerd[2704]: 2025-05-15 01:18:46.027 [INFO][6674] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644" Namespace="calico-system" Pod="calico-kube-controllers-df45848cb-96g89" WorkloadEndpoint="ci--4230.1.1--n--c9ea1a9895-k8s-calico--kube--controllers--df45848cb--96g89-eth0" May 15 01:18:46.031051 systemd[1]: Started cri-containerd-6fb65f008b0c8c90e59e5a8403be7e80a5f64ee0e39051afdf3f912182112016.scope - libcontainer container 6fb65f008b0c8c90e59e5a8403be7e80a5f64ee0e39051afdf3f912182112016. May 15 01:18:46.045287 containerd[2704]: time="2025-05-15T01:18:46.045224041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:18:46.045321 containerd[2704]: time="2025-05-15T01:18:46.045282951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:18:46.045321 containerd[2704]: time="2025-05-15T01:18:46.045294789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:46.045898 containerd[2704]: time="2025-05-15T01:18:46.045372096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:18:46.048575 containerd[2704]: time="2025-05-15T01:18:46.048545700Z" level=info msg="StartContainer for \"6fb65f008b0c8c90e59e5a8403be7e80a5f64ee0e39051afdf3f912182112016\" returns successfully" May 15 01:18:46.075064 systemd[1]: Started cri-containerd-c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644.scope - libcontainer container c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644. May 15 01:18:46.099209 containerd[2704]: time="2025-05-15T01:18:46.099178935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df45848cb-96g89,Uid:a3063761-b690-49aa-b309-e4b3ccc5c48d,Namespace:calico-system,Attempt:4,} returns sandbox id \"c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644\"" May 15 01:18:46.412769 kubelet[4234]: I0515 01:18:46.412744 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:18:46.414857 kubelet[4234]: I0515 01:18:46.414819 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-msqz5" podStartSLOduration=15.414804823 podStartE2EDuration="15.414804823s" podCreationTimestamp="2025-05-15 01:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:18:46.414710238 +0000 UTC m=+20.152233924" watchObservedRunningTime="2025-05-15 01:18:46.414804823 +0000 UTC m=+20.152328469" May 15 01:18:46.421732 kubelet[4234]: I0515 01:18:46.421688 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9rxxw" podStartSLOduration=15.421673464 podStartE2EDuration="15.421673464s" podCreationTimestamp="2025-05-15 01:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:18:46.421486335 +0000 UTC m=+20.159010021" watchObservedRunningTime="2025-05-15 01:18:46.421673464 +0000 UTC m=+20.159197150" May 15 01:18:46.620542 containerd[2704]: time="2025-05-15T01:18:46.620502690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:46.620867 containerd[2704]: time="2025-05-15T01:18:46.620533285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 01:18:46.621230 containerd[2704]: time="2025-05-15T01:18:46.621207095Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:46.623116 containerd[2704]: time="2025-05-15T01:18:46.623092668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:46.623797 containerd[2704]: time="2025-05-15T01:18:46.623779956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 991.495706ms" May 15 01:18:46.623828 containerd[2704]: 
time="2025-05-15T01:18:46.623804952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 01:18:46.624641 containerd[2704]: time="2025-05-15T01:18:46.624589304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 01:18:46.626279 containerd[2704]: time="2025-05-15T01:18:46.626254153Z" level=info msg="CreateContainer within sandbox \"228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 01:18:46.631432 containerd[2704]: time="2025-05-15T01:18:46.631404514Z" level=info msg="CreateContainer within sandbox \"228414584cd449f95bdbe721b057620c511b5a4213e18ce5a6e2f45b934976d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b05b3740b4050e214768fcc9cb9095cc9a2b6aa0c9e51212834f12b166a9f19c\"" May 15 01:18:46.631767 containerd[2704]: time="2025-05-15T01:18:46.631746379Z" level=info msg="StartContainer for \"b05b3740b4050e214768fcc9cb9095cc9a2b6aa0c9e51212834f12b166a9f19c\"" May 15 01:18:46.672011 systemd[1]: Started cri-containerd-b05b3740b4050e214768fcc9cb9095cc9a2b6aa0c9e51212834f12b166a9f19c.scope - libcontainer container b05b3740b4050e214768fcc9cb9095cc9a2b6aa0c9e51212834f12b166a9f19c. May 15 01:18:46.681864 kernel: bpftool[7583]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 15 01:18:46.697479 containerd[2704]: time="2025-05-15T01:18:46.697449201Z" level=info msg="StartContainer for \"b05b3740b4050e214768fcc9cb9095cc9a2b6aa0c9e51212834f12b166a9f19c\" returns successfully" May 15 01:18:46.840639 systemd-networkd[2613]: vxlan.calico: Link UP May 15 01:18:46.840644 systemd-networkd[2613]: vxlan.calico: Gained carrier May 15 01:18:46.916973 systemd-networkd[2613]: calib3f4012131d: Gained IPv6LL May 15 01:18:47.047820 containerd[2704]: time="2025-05-15T01:18:47.047741521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:47.047984 containerd[2704]: time="2025-05-15T01:18:47.047787874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 15 01:18:47.048449 containerd[2704]: time="2025-05-15T01:18:47.048429536Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:47.050255 containerd[2704]: time="2025-05-15T01:18:47.050227621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:47.050936 containerd[2704]: time="2025-05-15T01:18:47.050910997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 426.287619ms" May 15 01:18:47.050960 containerd[2704]: time="2025-05-15T01:18:47.050943392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 01:18:47.051693 
containerd[2704]: time="2025-05-15T01:18:47.051672641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 01:18:47.052558 containerd[2704]: time="2025-05-15T01:18:47.052534749Z" level=info msg="CreateContainer within sandbox \"af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 01:18:47.061405 containerd[2704]: time="2025-05-15T01:18:47.061371800Z" level=info msg="CreateContainer within sandbox \"af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"966349def30a2d203f7b2eabf32bbfce075c81fb314e0909f11c05f51c2c6b16\"" May 15 01:18:47.061804 containerd[2704]: time="2025-05-15T01:18:47.061773299Z" level=info msg="StartContainer for \"966349def30a2d203f7b2eabf32bbfce075c81fb314e0909f11c05f51c2c6b16\"" May 15 01:18:47.099970 systemd[1]: Started cri-containerd-966349def30a2d203f7b2eabf32bbfce075c81fb314e0909f11c05f51c2c6b16.scope - libcontainer container 966349def30a2d203f7b2eabf32bbfce075c81fb314e0909f11c05f51c2c6b16. May 15 01:18:47.121271 containerd[2704]: time="2025-05-15T01:18:47.121243221Z" level=info msg="StartContainer for \"966349def30a2d203f7b2eabf32bbfce075c81fb314e0909f11c05f51c2c6b16\" returns successfully" May 15 01:18:47.211099 containerd[2704]: time="2025-05-15T01:18:47.211063830Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:47.211182 containerd[2704]: time="2025-05-15T01:18:47.211111542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 01:18:47.213629 containerd[2704]: time="2025-05-15T01:18:47.213597083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 161.893568ms" May 15 01:18:47.213629 containerd[2704]: time="2025-05-15T01:18:47.213625559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 01:18:47.214324 containerd[2704]: time="2025-05-15T01:18:47.214306855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 01:18:47.215143 containerd[2704]: time="2025-05-15T01:18:47.215119451Z" level=info msg="CreateContainer within sandbox \"3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 01:18:47.219597 containerd[2704]: time="2025-05-15T01:18:47.219564452Z" level=info msg="CreateContainer within sandbox \"3cf6364b6fd1d41b16d95d5d37f3f887dbf4a7f87c345327daf18d53ffc152c2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"079da62581337c575ab3ed883addbb2ae52b99137300c371e2659d2327369532\"" May 15 01:18:47.219935 containerd[2704]: time="2025-05-15T01:18:47.219916358Z" level=info msg="StartContainer for \"079da62581337c575ab3ed883addbb2ae52b99137300c371e2659d2327369532\"" May 15 01:18:47.236935 systemd-networkd[2613]: cali24da76d6c0f: Gained IPv6LL May 15 01:18:47.245031 systemd[1]: Started 
cri-containerd-079da62581337c575ab3ed883addbb2ae52b99137300c371e2659d2327369532.scope - libcontainer container 079da62581337c575ab3ed883addbb2ae52b99137300c371e2659d2327369532. May 15 01:18:47.269496 containerd[2704]: time="2025-05-15T01:18:47.269467874Z" level=info msg="StartContainer for \"079da62581337c575ab3ed883addbb2ae52b99137300c371e2659d2327369532\" returns successfully" May 15 01:18:47.364933 systemd-networkd[2613]: cali15910d5f8e3: Gained IPv6LL May 15 01:18:47.365167 systemd-networkd[2613]: calibceaa597969: Gained IPv6LL May 15 01:18:47.425710 kubelet[4234]: I0515 01:18:47.425661 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-cmr4c" podStartSLOduration=8.999201771 podStartE2EDuration="10.425645434s" podCreationTimestamp="2025-05-15 01:18:37 +0000 UTC" firstStartedPulling="2025-05-15 01:18:45.787736051 +0000 UTC m=+19.525259737" lastFinishedPulling="2025-05-15 01:18:47.214179714 +0000 UTC m=+20.951703400" observedRunningTime="2025-05-15 01:18:47.425457183 +0000 UTC m=+21.162980869" watchObservedRunningTime="2025-05-15 01:18:47.425645434 +0000 UTC m=+21.163169120" May 15 01:18:47.432650 kubelet[4234]: I0515 01:18:47.432607 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65cbd4f5f8-4mgvv" podStartSLOduration=9.440201897 podStartE2EDuration="10.432594053s" podCreationTimestamp="2025-05-15 01:18:37 +0000 UTC" firstStartedPulling="2025-05-15 01:18:45.632036734 +0000 UTC m=+19.369560420" lastFinishedPulling="2025-05-15 01:18:46.62442889 +0000 UTC m=+20.361952576" observedRunningTime="2025-05-15 01:18:47.432438037 +0000 UTC m=+21.169961723" watchObservedRunningTime="2025-05-15 01:18:47.432594053 +0000 UTC m=+21.170117779" May 15 01:18:47.493933 systemd-networkd[2613]: cali974f071940f: Gained IPv6LL May 15 01:18:47.620953 systemd-networkd[2613]: cali397e3dbc3a2: Gained IPv6LL May 15 01:18:48.038445 containerd[2704]: time="2025-05-15T01:18:48.038401936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:48.038789 containerd[2704]: time="2025-05-15T01:18:48.038483724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 01:18:48.039193 containerd[2704]: time="2025-05-15T01:18:48.039173825Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:48.041047 containerd[2704]: time="2025-05-15T01:18:48.041023041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:48.041742 containerd[2704]: time="2025-05-15T01:18:48.041721981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 827.38909ms" May 15 01:18:48.041776 containerd[2704]: time="2025-05-15T01:18:48.041748777Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 01:18:48.042557 containerd[2704]: time="2025-05-15T01:18:48.042537224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 01:18:48.047398 containerd[2704]: time="2025-05-15T01:18:48.047372732Z" level=info msg="CreateContainer within sandbox \"c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 01:18:48.051898 containerd[2704]: time="2025-05-15T01:18:48.051872408Z" level=info msg="CreateContainer within sandbox \"c3cd1e464b46086f586edc84916c76364d76b0d137eec602c4e54d4b53b81644\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a6a758a32a1262008aec53545f4a02543348e68f8121f899ab20b0066809a81c\"" May 15 01:18:48.052245 containerd[2704]: time="2025-05-15T01:18:48.052220638Z" level=info msg="StartContainer for \"a6a758a32a1262008aec53545f4a02543348e68f8121f899ab20b0066809a81c\"" May 15 01:18:48.081027 systemd[1]: Started cri-containerd-a6a758a32a1262008aec53545f4a02543348e68f8121f899ab20b0066809a81c.scope - libcontainer container a6a758a32a1262008aec53545f4a02543348e68f8121f899ab20b0066809a81c. May 15 01:18:48.105621 containerd[2704]: time="2025-05-15T01:18:48.105589041Z" level=info msg="StartContainer for \"a6a758a32a1262008aec53545f4a02543348e68f8121f899ab20b0066809a81c\" returns successfully" May 15 01:18:48.424303 kubelet[4234]: I0515 01:18:48.424214 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:18:48.424303 kubelet[4234]: I0515 01:18:48.424225 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:18:48.434628 kubelet[4234]: I0515 01:18:48.434576 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-df45848cb-96g89" podStartSLOduration=9.493380922 podStartE2EDuration="11.434560803s" podCreationTimestamp="2025-05-15 01:18:37 +0000 UTC" firstStartedPulling="2025-05-15 01:18:46.101227122 +0000 UTC m=+19.838750768" lastFinishedPulling="2025-05-15 01:18:48.042406963 +0000 UTC m=+21.779930649" observedRunningTime="2025-05-15 01:18:48.434319597 +0000 UTC m=+22.171843283" watchObservedRunningTime="2025-05-15 01:18:48.434560803 +0000 UTC m=+22.172084489" May 15 01:18:48.453009 systemd-networkd[2613]: vxlan.calico: Gained IPv6LL May 15 01:18:48.500605 containerd[2704]: time="2025-05-15T01:18:48.500557318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:48.500731 containerd[2704]: time="2025-05-15T01:18:48.500613750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 01:18:48.501346 containerd[2704]: time="2025-05-15T01:18:48.501325088Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:48.503089 containerd[2704]: time="2025-05-15T01:18:48.503070238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 01:18:48.503783 
containerd[2704]: time="2025-05-15T01:18:48.503767819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 461.199439ms" May 15 01:18:48.503814 containerd[2704]: time="2025-05-15T01:18:48.503789855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 01:18:48.505365 containerd[2704]: time="2025-05-15T01:18:48.505341193Z" level=info msg="CreateContainer within sandbox \"af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 01:18:48.511003 containerd[2704]: time="2025-05-15T01:18:48.510973068Z" level=info msg="CreateContainer within sandbox \"af1c13ed88fa504f2ea139593dd300b594e41a37fde8cecf83a76d743ef639d5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c8738698a7a9eccec1945723b206315a06e9723c01b4aeef50a360cf43794222\"" May 15 01:18:48.511300 containerd[2704]: time="2025-05-15T01:18:48.511280304Z" level=info msg="StartContainer for \"c8738698a7a9eccec1945723b206315a06e9723c01b4aeef50a360cf43794222\"" May 15 01:18:48.543025 systemd[1]: Started cri-containerd-c8738698a7a9eccec1945723b206315a06e9723c01b4aeef50a360cf43794222.scope - libcontainer container c8738698a7a9eccec1945723b206315a06e9723c01b4aeef50a360cf43794222. May 15 01:18:48.563661 containerd[2704]: time="2025-05-15T01:18:48.563633531Z" level=info msg="StartContainer for \"c8738698a7a9eccec1945723b206315a06e9723c01b4aeef50a360cf43794222\" returns successfully" May 15 01:18:48.898425 kubelet[4234]: I0515 01:18:48.898394 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:18:49.374614 kubelet[4234]: I0515 01:18:49.374592 4234 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 01:18:49.374719 kubelet[4234]: I0515 01:18:49.374621 4234 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 01:18:49.429610 kubelet[4234]: I0515 01:18:49.429585 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:18:49.442794 kubelet[4234]: I0515 01:18:49.441528 4234 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jm9rx" podStartSLOduration=9.61705337 podStartE2EDuration="12.441510683s" podCreationTimestamp="2025-05-15 01:18:37 +0000 UTC" firstStartedPulling="2025-05-15 01:18:45.679900421 +0000 UTC m=+19.417424067" lastFinishedPulling="2025-05-15 01:18:48.504357694 +0000 UTC m=+22.241881380" observedRunningTime="2025-05-15 01:18:49.441026908 +0000 UTC m=+23.178550594" watchObservedRunningTime="2025-05-15 01:18:49.441510683 +0000 UTC m=+23.179034369" May 15 01:18:57.531761 kubelet[4234]: I0515 01:18:57.531709 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:19:26.327958 containerd[2704]: time="2025-05-15T01:19:26.327876949Z" level=info msg="StopPodSandbox for 
\"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" May 15 01:19:26.328323 containerd[2704]: time="2025-05-15T01:19:26.327978709Z" level=info msg="TearDown network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" successfully" May 15 01:19:26.328323 containerd[2704]: time="2025-05-15T01:19:26.327989749Z" level=info msg="StopPodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" returns successfully" May 15 01:19:26.328323 containerd[2704]: time="2025-05-15T01:19:26.328295028Z" level=info msg="RemovePodSandbox for \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" May 15 01:19:26.328323 containerd[2704]: time="2025-05-15T01:19:26.328321588Z" level=info msg="Forcibly stopping sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\"" May 15 01:19:26.328417 containerd[2704]: time="2025-05-15T01:19:26.328386827Z" level=info msg="TearDown network for sandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" successfully" May 15 01:19:26.329870 containerd[2704]: time="2025-05-15T01:19:26.329842501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.329902 containerd[2704]: time="2025-05-15T01:19:26.329891781Z" level=info msg="RemovePodSandbox \"dcf5eca5d95aee9d5dc5d4bddd8d5de1ad906de2d72d1ccd0c2001c0e1a655f6\" returns successfully" May 15 01:19:26.330164 containerd[2704]: time="2025-05-15T01:19:26.330146340Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\"" May 15 01:19:26.330234 containerd[2704]: time="2025-05-15T01:19:26.330223940Z" level=info msg="TearDown network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" successfully" May 15 01:19:26.330260 containerd[2704]: time="2025-05-15T01:19:26.330234340Z" level=info msg="StopPodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" returns successfully" May 15 01:19:26.330434 containerd[2704]: time="2025-05-15T01:19:26.330419739Z" level=info msg="RemovePodSandbox for \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\"" May 15 01:19:26.330458 containerd[2704]: time="2025-05-15T01:19:26.330441779Z" level=info msg="Forcibly stopping sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\"" May 15 01:19:26.330519 containerd[2704]: time="2025-05-15T01:19:26.330501059Z" level=info msg="TearDown network for sandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" successfully" May 15 01:19:26.331755 containerd[2704]: time="2025-05-15T01:19:26.331734974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.331781 containerd[2704]: time="2025-05-15T01:19:26.331773853Z" level=info msg="RemovePodSandbox \"d822324158824fd63aa3d14a6fe984b3de237e53284fac0d1347e43c2fbe84c6\" returns successfully" May 15 01:19:26.332006 containerd[2704]: time="2025-05-15T01:19:26.331989613Z" level=info msg="StopPodSandbox for \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\"" May 15 01:19:26.332078 containerd[2704]: time="2025-05-15T01:19:26.332068212Z" level=info msg="TearDown network for sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" successfully" May 15 01:19:26.332104 containerd[2704]: time="2025-05-15T01:19:26.332078532Z" level=info msg="StopPodSandbox for \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" returns successfully" May 15 01:19:26.332285 containerd[2704]: time="2025-05-15T01:19:26.332267531Z" level=info msg="RemovePodSandbox for \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\"" May 15 01:19:26.332309 containerd[2704]: time="2025-05-15T01:19:26.332291411Z" level=info msg="Forcibly stopping sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\"" May 15 01:19:26.332358 containerd[2704]: time="2025-05-15T01:19:26.332349811Z" level=info msg="TearDown network for sandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" successfully" May 15 01:19:26.333604 containerd[2704]: time="2025-05-15T01:19:26.333582286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.333646 containerd[2704]: time="2025-05-15T01:19:26.333622006Z" level=info msg="RemovePodSandbox \"376504397c346330271321bd4ae846bca8835552ff5a0b8e1904872976b867a8\" returns successfully" May 15 01:19:26.333834 containerd[2704]: time="2025-05-15T01:19:26.333817085Z" level=info msg="StopPodSandbox for \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\"" May 15 01:19:26.333912 containerd[2704]: time="2025-05-15T01:19:26.333901365Z" level=info msg="TearDown network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\" successfully" May 15 01:19:26.333940 containerd[2704]: time="2025-05-15T01:19:26.333912365Z" level=info msg="StopPodSandbox for \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\" returns successfully" May 15 01:19:26.334094 containerd[2704]: time="2025-05-15T01:19:26.334080644Z" level=info msg="RemovePodSandbox for \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\"" May 15 01:19:26.334115 containerd[2704]: time="2025-05-15T01:19:26.334100124Z" level=info msg="Forcibly stopping sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\"" May 15 01:19:26.334169 containerd[2704]: time="2025-05-15T01:19:26.334159404Z" level=info msg="TearDown network for sandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\" successfully" May 15 01:19:26.335377 containerd[2704]: time="2025-05-15T01:19:26.335358039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.335407 containerd[2704]: time="2025-05-15T01:19:26.335398159Z" level=info msg="RemovePodSandbox \"d1da5ba48f7bcc164c36f1ade568780cbbf222fa592f73717f0f61fc7bb01ac1\" returns successfully" May 15 01:19:26.335591 containerd[2704]: time="2025-05-15T01:19:26.335578078Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" May 15 01:19:26.335650 containerd[2704]: time="2025-05-15T01:19:26.335640718Z" level=info msg="TearDown network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" successfully" May 15 01:19:26.335673 containerd[2704]: time="2025-05-15T01:19:26.335650238Z" level=info msg="StopPodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" returns successfully" May 15 01:19:26.335861 containerd[2704]: time="2025-05-15T01:19:26.335844117Z" level=info msg="RemovePodSandbox for \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" May 15 01:19:26.335886 containerd[2704]: time="2025-05-15T01:19:26.335866797Z" level=info msg="Forcibly stopping sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\"" May 15 01:19:26.335932 containerd[2704]: time="2025-05-15T01:19:26.335923036Z" level=info msg="TearDown network for sandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" successfully" May 15 01:19:26.337213 containerd[2704]: time="2025-05-15T01:19:26.337188271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.337243 containerd[2704]: time="2025-05-15T01:19:26.337234431Z" level=info msg="RemovePodSandbox \"3de90b1da12c62bfcecf0d6fda5ad3b7c635d3286c55fa6a13d63e3cf7e390d8\" returns successfully" May 15 01:19:26.337457 containerd[2704]: time="2025-05-15T01:19:26.337436230Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\"" May 15 01:19:26.337531 containerd[2704]: time="2025-05-15T01:19:26.337517430Z" level=info msg="TearDown network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" successfully" May 15 01:19:26.337556 containerd[2704]: time="2025-05-15T01:19:26.337528950Z" level=info msg="StopPodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" returns successfully" May 15 01:19:26.337732 containerd[2704]: time="2025-05-15T01:19:26.337714469Z" level=info msg="RemovePodSandbox for \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\"" May 15 01:19:26.337759 containerd[2704]: time="2025-05-15T01:19:26.337734389Z" level=info msg="Forcibly stopping sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\"" May 15 01:19:26.337808 containerd[2704]: time="2025-05-15T01:19:26.337796309Z" level=info msg="TearDown network for sandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" successfully" May 15 01:19:26.339085 containerd[2704]: time="2025-05-15T01:19:26.339061304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.339112 containerd[2704]: time="2025-05-15T01:19:26.339102623Z" level=info msg="RemovePodSandbox \"fd893c20227add24950e674bc45bf057d4d5d49251b2bb2f943251c7b9620d8a\" returns successfully" May 15 01:19:26.339331 containerd[2704]: time="2025-05-15T01:19:26.339315103Z" level=info msg="StopPodSandbox for \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\"" May 15 01:19:26.339393 containerd[2704]: time="2025-05-15T01:19:26.339381302Z" level=info msg="TearDown network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" successfully" May 15 01:19:26.339420 containerd[2704]: time="2025-05-15T01:19:26.339392342Z" level=info msg="StopPodSandbox for \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" returns successfully" May 15 01:19:26.339611 containerd[2704]: time="2025-05-15T01:19:26.339591061Z" level=info msg="RemovePodSandbox for \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\"" May 15 01:19:26.339638 containerd[2704]: time="2025-05-15T01:19:26.339615941Z" level=info msg="Forcibly stopping sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\"" May 15 01:19:26.339686 containerd[2704]: time="2025-05-15T01:19:26.339673981Z" level=info msg="TearDown network for sandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" successfully" May 15 01:19:26.341023 containerd[2704]: time="2025-05-15T01:19:26.340999416Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.341050 containerd[2704]: time="2025-05-15T01:19:26.341041935Z" level=info msg="RemovePodSandbox \"4ae61e81d58c75f364e6de8d970e6c809bcebfbc449d7a3953d4e572dc2d7ec0\" returns successfully" May 15 01:19:26.341271 containerd[2704]: time="2025-05-15T01:19:26.341254375Z" level=info msg="StopPodSandbox for \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\"" May 15 01:19:26.341342 containerd[2704]: time="2025-05-15T01:19:26.341328934Z" level=info msg="TearDown network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\" successfully" May 15 01:19:26.341366 containerd[2704]: time="2025-05-15T01:19:26.341340774Z" level=info msg="StopPodSandbox for \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\" returns successfully" May 15 01:19:26.341523 containerd[2704]: time="2025-05-15T01:19:26.341507614Z" level=info msg="RemovePodSandbox for \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\"" May 15 01:19:26.341548 containerd[2704]: time="2025-05-15T01:19:26.341526853Z" level=info msg="Forcibly stopping sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\"" May 15 01:19:26.341589 containerd[2704]: time="2025-05-15T01:19:26.341577573Z" level=info msg="TearDown network for sandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\" successfully" May 15 01:19:26.342877 containerd[2704]: time="2025-05-15T01:19:26.342843048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.342931 containerd[2704]: time="2025-05-15T01:19:26.342894088Z" level=info msg="RemovePodSandbox \"9c895872761246d6df1ad1ad881486fc376d93342e4482fb67350ba5329c634e\" returns successfully" May 15 01:19:26.343115 containerd[2704]: time="2025-05-15T01:19:26.343099247Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" May 15 01:19:26.343193 containerd[2704]: time="2025-05-15T01:19:26.343180567Z" level=info msg="TearDown network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" successfully" May 15 01:19:26.343219 containerd[2704]: time="2025-05-15T01:19:26.343190807Z" level=info msg="StopPodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" returns successfully" May 15 01:19:26.343378 containerd[2704]: time="2025-05-15T01:19:26.343361726Z" level=info msg="RemovePodSandbox for \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" May 15 01:19:26.343400 containerd[2704]: time="2025-05-15T01:19:26.343383646Z" level=info msg="Forcibly stopping sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\"" May 15 01:19:26.343462 containerd[2704]: time="2025-05-15T01:19:26.343450526Z" level=info msg="TearDown network for sandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" successfully" May 15 01:19:26.344683 containerd[2704]: time="2025-05-15T01:19:26.344660881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.344713 containerd[2704]: time="2025-05-15T01:19:26.344701640Z" level=info msg="RemovePodSandbox \"cd002def619364a9adfeab749a955b997e2305177f37d7725dd2f9d25893c77c\" returns successfully" May 15 01:19:26.344902 containerd[2704]: time="2025-05-15T01:19:26.344886080Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\"" May 15 01:19:26.344963 containerd[2704]: time="2025-05-15T01:19:26.344950599Z" level=info msg="TearDown network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" successfully" May 15 01:19:26.344987 containerd[2704]: time="2025-05-15T01:19:26.344960599Z" level=info msg="StopPodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" returns successfully" May 15 01:19:26.345166 containerd[2704]: time="2025-05-15T01:19:26.345150359Z" level=info msg="RemovePodSandbox for \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\"" May 15 01:19:26.345188 containerd[2704]: time="2025-05-15T01:19:26.345169879Z" level=info msg="Forcibly stopping sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\"" May 15 01:19:26.345243 containerd[2704]: time="2025-05-15T01:19:26.345232518Z" level=info msg="TearDown network for sandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" successfully" May 15 01:19:26.346452 containerd[2704]: time="2025-05-15T01:19:26.346429353Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.346500 containerd[2704]: time="2025-05-15T01:19:26.346472393Z" level=info msg="RemovePodSandbox \"86041c9c57f1c9ae0073842e384a9d60f992fec1c101b92c36740147caffca09\" returns successfully" May 15 01:19:26.346668 containerd[2704]: time="2025-05-15T01:19:26.346654152Z" level=info msg="StopPodSandbox for \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\"" May 15 01:19:26.346731 containerd[2704]: time="2025-05-15T01:19:26.346720432Z" level=info msg="TearDown network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" successfully" May 15 01:19:26.346754 containerd[2704]: time="2025-05-15T01:19:26.346730392Z" level=info msg="StopPodSandbox for \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" returns successfully" May 15 01:19:26.346936 containerd[2704]: time="2025-05-15T01:19:26.346918471Z" level=info msg="RemovePodSandbox for \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\"" May 15 01:19:26.346960 containerd[2704]: time="2025-05-15T01:19:26.346942991Z" level=info msg="Forcibly stopping sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\"" May 15 01:19:26.347012 containerd[2704]: time="2025-05-15T01:19:26.347002231Z" level=info msg="TearDown network for sandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" successfully" May 15 01:19:26.348288 containerd[2704]: time="2025-05-15T01:19:26.348266906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.348317 containerd[2704]: time="2025-05-15T01:19:26.348309266Z" level=info msg="RemovePodSandbox \"eda80e744c40d38f546229e7e655ae94142df9d044085a7ef75f307e9038bc61\" returns successfully" May 15 01:19:26.348520 containerd[2704]: time="2025-05-15T01:19:26.348505465Z" level=info msg="StopPodSandbox for \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\"" May 15 01:19:26.348596 containerd[2704]: time="2025-05-15T01:19:26.348584865Z" level=info msg="TearDown network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\" successfully" May 15 01:19:26.348619 containerd[2704]: time="2025-05-15T01:19:26.348596345Z" level=info msg="StopPodSandbox for \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\" returns successfully" May 15 01:19:26.348785 containerd[2704]: time="2025-05-15T01:19:26.348770984Z" level=info msg="RemovePodSandbox for \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\"" May 15 01:19:26.348808 containerd[2704]: time="2025-05-15T01:19:26.348790184Z" level=info msg="Forcibly stopping sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\"" May 15 01:19:26.348873 containerd[2704]: time="2025-05-15T01:19:26.348852023Z" level=info msg="TearDown network for sandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\" successfully" May 15 01:19:26.350090 containerd[2704]: time="2025-05-15T01:19:26.350066219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.350120 containerd[2704]: time="2025-05-15T01:19:26.350112698Z" level=info msg="RemovePodSandbox \"e2d4d5be71399db0d94843ef0bd58dcafe8ce61a896adec73f42ff2905ac4b61\" returns successfully" May 15 01:19:26.350330 containerd[2704]: time="2025-05-15T01:19:26.350313258Z" level=info msg="StopPodSandbox for \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\"" May 15 01:19:26.350400 containerd[2704]: time="2025-05-15T01:19:26.350387097Z" level=info msg="TearDown network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" successfully" May 15 01:19:26.350424 containerd[2704]: time="2025-05-15T01:19:26.350397977Z" level=info msg="StopPodSandbox for \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" returns successfully" May 15 01:19:26.350593 containerd[2704]: time="2025-05-15T01:19:26.350576656Z" level=info msg="RemovePodSandbox for \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\"" May 15 01:19:26.350622 containerd[2704]: time="2025-05-15T01:19:26.350596336Z" level=info msg="Forcibly stopping sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\"" May 15 01:19:26.350661 containerd[2704]: time="2025-05-15T01:19:26.350649736Z" level=info msg="TearDown network for sandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" successfully" May 15 01:19:26.351923 containerd[2704]: time="2025-05-15T01:19:26.351898611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.351956 containerd[2704]: time="2025-05-15T01:19:26.351946091Z" level=info msg="RemovePodSandbox \"29833505b57c519a7c724de6e6629ab92bd883ebbb4d7e141dcde27339807000\" returns successfully" May 15 01:19:26.353695 containerd[2704]: time="2025-05-15T01:19:26.353668004Z" level=info msg="StopPodSandbox for \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\"" May 15 01:19:26.353961 containerd[2704]: time="2025-05-15T01:19:26.353764963Z" level=info msg="TearDown network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\" successfully" May 15 01:19:26.353961 containerd[2704]: time="2025-05-15T01:19:26.353776203Z" level=info msg="StopPodSandbox for \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\" returns successfully" May 15 01:19:26.354250 containerd[2704]: time="2025-05-15T01:19:26.354228841Z" level=info msg="RemovePodSandbox for \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\"" May 15 01:19:26.354282 containerd[2704]: time="2025-05-15T01:19:26.354256081Z" level=info msg="Forcibly stopping sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\"" May 15 01:19:26.354335 containerd[2704]: time="2025-05-15T01:19:26.354323841Z" level=info msg="TearDown network for sandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\" successfully" May 15 01:19:26.355601 containerd[2704]: time="2025-05-15T01:19:26.355578396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.355636 containerd[2704]: time="2025-05-15T01:19:26.355623836Z" level=info msg="RemovePodSandbox \"83b1138525887281825b12eea65626bbe77011d8ad57d1d0fccac05f6d791ca3\" returns successfully" May 15 01:19:26.355923 containerd[2704]: time="2025-05-15T01:19:26.355904475Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" May 15 01:19:26.355995 containerd[2704]: time="2025-05-15T01:19:26.355984954Z" level=info msg="TearDown network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" successfully" May 15 01:19:26.356017 containerd[2704]: time="2025-05-15T01:19:26.355995554Z" level=info msg="StopPodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" returns successfully" May 15 01:19:26.356230 containerd[2704]: time="2025-05-15T01:19:26.356212233Z" level=info msg="RemovePodSandbox for \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" May 15 01:19:26.356253 containerd[2704]: time="2025-05-15T01:19:26.356234953Z" level=info msg="Forcibly stopping sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\"" May 15 01:19:26.356306 containerd[2704]: time="2025-05-15T01:19:26.356296473Z" level=info msg="TearDown network for sandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" successfully" May 15 01:19:26.357576 containerd[2704]: time="2025-05-15T01:19:26.357547628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.357633 containerd[2704]: time="2025-05-15T01:19:26.357591388Z" level=info msg="RemovePodSandbox \"4cb43128047a29ab4f1a182dfd6e45fe186defd80ddee2bdbd0588d5c0f4476c\" returns successfully" May 15 01:19:26.357820 containerd[2704]: time="2025-05-15T01:19:26.357794827Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\"" May 15 01:19:26.357899 containerd[2704]: time="2025-05-15T01:19:26.357886226Z" level=info msg="TearDown network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" successfully" May 15 01:19:26.357899 containerd[2704]: time="2025-05-15T01:19:26.357896826Z" level=info msg="StopPodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" returns successfully" May 15 01:19:26.358116 containerd[2704]: time="2025-05-15T01:19:26.358100066Z" level=info msg="RemovePodSandbox for \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\"" May 15 01:19:26.358146 containerd[2704]: time="2025-05-15T01:19:26.358118826Z" level=info msg="Forcibly stopping sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\"" May 15 01:19:26.358183 containerd[2704]: time="2025-05-15T01:19:26.358171385Z" level=info msg="TearDown network for sandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" successfully" May 15 01:19:26.359458 containerd[2704]: time="2025-05-15T01:19:26.359433660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.359492 containerd[2704]: time="2025-05-15T01:19:26.359481180Z" level=info msg="RemovePodSandbox \"1b7136f1af73062d3727dec01fb1d5bbf174c023942c975d9a4cb976c6b998c0\" returns successfully" May 15 01:19:26.359694 containerd[2704]: time="2025-05-15T01:19:26.359679019Z" level=info msg="StopPodSandbox for \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\"" May 15 01:19:26.359765 containerd[2704]: time="2025-05-15T01:19:26.359755179Z" level=info msg="TearDown network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" successfully" May 15 01:19:26.359831 containerd[2704]: time="2025-05-15T01:19:26.359765219Z" level=info msg="StopPodSandbox for \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" returns successfully" May 15 01:19:26.359965 containerd[2704]: time="2025-05-15T01:19:26.359950658Z" level=info msg="RemovePodSandbox for \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\"" May 15 01:19:26.359991 containerd[2704]: time="2025-05-15T01:19:26.359970778Z" level=info msg="Forcibly stopping sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\"" May 15 01:19:26.360046 containerd[2704]: time="2025-05-15T01:19:26.360035698Z" level=info msg="TearDown network for sandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" successfully" May 15 01:19:26.361289 containerd[2704]: time="2025-05-15T01:19:26.361266053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.361337 containerd[2704]: time="2025-05-15T01:19:26.361308732Z" level=info msg="RemovePodSandbox \"7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8\" returns successfully" May 15 01:19:26.361519 containerd[2704]: time="2025-05-15T01:19:26.361505452Z" level=info msg="StopPodSandbox for \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\"" May 15 01:19:26.361577 containerd[2704]: time="2025-05-15T01:19:26.361565811Z" level=info msg="TearDown network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\" successfully" May 15 01:19:26.361601 containerd[2704]: time="2025-05-15T01:19:26.361576851Z" level=info msg="StopPodSandbox for \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\" returns successfully" May 15 01:19:26.361757 containerd[2704]: time="2025-05-15T01:19:26.361742411Z" level=info msg="RemovePodSandbox for \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\"" May 15 01:19:26.361778 containerd[2704]: time="2025-05-15T01:19:26.361762771Z" level=info msg="Forcibly stopping sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\"" May 15 01:19:26.361850 containerd[2704]: time="2025-05-15T01:19:26.361840730Z" level=info msg="TearDown network for sandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\" successfully" May 15 01:19:26.363083 containerd[2704]: time="2025-05-15T01:19:26.363057165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.363125 containerd[2704]: time="2025-05-15T01:19:26.363101685Z" level=info msg="RemovePodSandbox \"6e4419dbdd5ed3430952bcb8e33331b98726584a79cc9c3d9147e36fd4b2a80b\" returns successfully" May 15 01:19:26.363355 containerd[2704]: time="2025-05-15T01:19:26.363335164Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" May 15 01:19:26.363443 containerd[2704]: time="2025-05-15T01:19:26.363409044Z" level=info msg="TearDown network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" successfully" May 15 01:19:26.363443 containerd[2704]: time="2025-05-15T01:19:26.363419324Z" level=info msg="StopPodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" returns successfully" May 15 01:19:26.363634 containerd[2704]: time="2025-05-15T01:19:26.363617603Z" level=info msg="RemovePodSandbox for \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" May 15 01:19:26.363668 containerd[2704]: time="2025-05-15T01:19:26.363637763Z" level=info msg="Forcibly stopping sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\"" May 15 01:19:26.363723 containerd[2704]: time="2025-05-15T01:19:26.363711403Z" level=info msg="TearDown network for sandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" successfully" May 15 01:19:26.364952 containerd[2704]: time="2025-05-15T01:19:26.364930438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.364988 containerd[2704]: time="2025-05-15T01:19:26.364973917Z" level=info msg="RemovePodSandbox \"478d9be7b51d46a37d8efa62383d66634c7798cd94382be16f7365eb59ba1152\" returns successfully" May 15 01:19:26.365227 containerd[2704]: time="2025-05-15T01:19:26.365209117Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\"" May 15 01:19:26.365289 containerd[2704]: time="2025-05-15T01:19:26.365278076Z" level=info msg="TearDown network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" successfully" May 15 01:19:26.365528 containerd[2704]: time="2025-05-15T01:19:26.365287316Z" level=info msg="StopPodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" returns successfully" May 15 01:19:26.367074 containerd[2704]: time="2025-05-15T01:19:26.367030749Z" level=info msg="RemovePodSandbox for \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\"" May 15 01:19:26.367136 containerd[2704]: time="2025-05-15T01:19:26.367109909Z" level=info msg="Forcibly stopping sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\"" May 15 01:19:26.367556 containerd[2704]: time="2025-05-15T01:19:26.367528307Z" level=info msg="TearDown network for sandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" successfully" May 15 01:19:26.368895 containerd[2704]: time="2025-05-15T01:19:26.368869622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.368921 containerd[2704]: time="2025-05-15T01:19:26.368912181Z" level=info msg="RemovePodSandbox \"a9e6434c2b25d22a1542c0a70f2b3b70179b76ed225e1e0c02ac6abd2ce9c5ed\" returns successfully" May 15 01:19:26.369110 containerd[2704]: time="2025-05-15T01:19:26.369088901Z" level=info msg="StopPodSandbox for \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\"" May 15 01:19:26.369189 containerd[2704]: time="2025-05-15T01:19:26.369176620Z" level=info msg="TearDown network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" successfully" May 15 01:19:26.369212 containerd[2704]: time="2025-05-15T01:19:26.369188780Z" level=info msg="StopPodSandbox for \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" returns successfully" May 15 01:19:26.369393 containerd[2704]: time="2025-05-15T01:19:26.369372739Z" level=info msg="RemovePodSandbox for \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\"" May 15 01:19:26.369415 containerd[2704]: time="2025-05-15T01:19:26.369398139Z" level=info msg="Forcibly stopping sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\"" May 15 01:19:26.369475 containerd[2704]: time="2025-05-15T01:19:26.369464499Z" level=info msg="TearDown network for sandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" successfully" May 15 01:19:26.370745 containerd[2704]: time="2025-05-15T01:19:26.370720534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 01:19:26.370797 containerd[2704]: time="2025-05-15T01:19:26.370765414Z" level=info msg="RemovePodSandbox \"86bd9c3f07e443de1ad10a861253556c952afa92851a7efaac105a5aa1f89a6c\" returns successfully" May 15 01:19:26.371043 containerd[2704]: time="2025-05-15T01:19:26.371024933Z" level=info msg="StopPodSandbox for \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\"" May 15 01:19:26.371118 containerd[2704]: time="2025-05-15T01:19:26.371107572Z" level=info msg="TearDown network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\" successfully" May 15 01:19:26.371140 containerd[2704]: time="2025-05-15T01:19:26.371118212Z" level=info msg="StopPodSandbox for \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\" returns successfully" May 15 01:19:26.371317 containerd[2704]: time="2025-05-15T01:19:26.371298292Z" level=info msg="RemovePodSandbox for \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\"" May 15 01:19:26.371338 containerd[2704]: time="2025-05-15T01:19:26.371323251Z" level=info msg="Forcibly stopping sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\"" May 15 01:19:26.371394 containerd[2704]: time="2025-05-15T01:19:26.371384211Z" level=info msg="TearDown network for sandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\" successfully" May 15 01:19:26.372608 containerd[2704]: time="2025-05-15T01:19:26.372587166Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 01:19:26.372636 containerd[2704]: time="2025-05-15T01:19:26.372628326Z" level=info msg="RemovePodSandbox \"2758672536809cfad167b5c3c992b000f184de471068acd0c5ad7e595d84ccbf\" returns successfully" May 15 01:19:28.870025 kubelet[4234]: I0515 01:19:28.869971 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:19:40.541196 kubelet[4234]: I0515 01:19:40.541157 4234 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:23:36.751590 update_engine[2696]: I20250515 01:23:36.750952 2696 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 01:23:36.751590 update_engine[2696]: I20250515 01:23:36.751013 2696 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 01:23:36.751590 update_engine[2696]: I20250515 01:23:36.751256 2696 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 01:23:36.751590 update_engine[2696]: I20250515 01:23:36.751574 2696 omaha_request_params.cc:62] Current group set to beta May 15 01:23:36.752271 update_engine[2696]: I20250515 01:23:36.751649 2696 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 01:23:36.752271 update_engine[2696]: I20250515 01:23:36.751657 2696 update_attempter.cc:643] Scheduling an action processor start. May 15 01:23:36.752271 update_engine[2696]: I20250515 01:23:36.751672 2696 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 01:23:36.752271 update_engine[2696]: I20250515 01:23:36.751699 2696 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 01:23:36.752271 update_engine[2696]: I20250515 01:23:36.751745 2696 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 01:23:36.752271 update_engine[2696]: I20250515 01:23:36.751752 2696 omaha_request_action.cc:272] Request: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: May 15 01:23:36.752271 update_engine[2696]: I20250515 01:23:36.751758 2696 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:23:36.752551 locksmithd[2730]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 01:23:36.752786 update_engine[2696]: I20250515 01:23:36.752766 2696 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:23:36.753151 update_engine[2696]: I20250515 01:23:36.753115 2696 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:23:36.753527 update_engine[2696]: E20250515 01:23:36.753511 2696 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:23:36.753573 update_engine[2696]: I20250515 01:23:36.753561 2696 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 01:23:46.752668 update_engine[2696]: I20250515 01:23:46.752604 2696 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:23:46.753114 update_engine[2696]: I20250515 01:23:46.752895 2696 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:23:46.753114 update_engine[2696]: I20250515 01:23:46.753103 2696 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
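The StopPodSandbox / TearDown / RemovePodSandbox entries above look like kubelet's periodic cleanup of exited pod sandboxes over the CRI; the "not found" warnings appear to mean only that the sandbox record is already gone by the time containerd tries to attach a status to the removal event, so the event is sent with a nil status and the removal still "returns successfully". As a minimal sketch (not the kubelet's actual code) of the same two RPCs issued directly against the CRI endpoint, something like the following would do; it assumes containerd's default socket path /run/containerd/containerd.sock and reuses one sandbox ID from the log purely as an example.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path: containerd's default CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// One of the sandbox IDs from the log, used here only as an example.
	sandboxID := "7cf74239a897d16a30cee9e8b0dcb2d9eca53811848690ab49650ca1254e15e8"

	// StopPodSandbox tears down the sandbox's network; it is idempotent,
	// which is why an already-stopped sandbox still logs "returns successfully".
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("stop sandbox: %v", err)
	}

	// RemovePodSandbox deletes the sandbox record; a status lookup for the
	// same ID afterwards fails with "not found", matching the warnings above.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("remove sandbox: %v", err)
	}
	log.Printf("sandbox %s removed", sandboxID)
}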
May 15 01:23:46.753491 update_engine[2696]: E20250515 01:23:46.753472 2696 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:23:46.753533 update_engine[2696]: I20250515 01:23:46.753520 2696 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 01:23:56.752673 update_engine[2696]: I20250515 01:23:56.752565 2696 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:23:56.753072 update_engine[2696]: I20250515 01:23:56.752832 2696 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:23:56.753072 update_engine[2696]: I20250515 01:23:56.753060 2696 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:23:56.753492 update_engine[2696]: E20250515 01:23:56.753475 2696 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:23:56.753531 update_engine[2696]: I20250515 01:23:56.753520 2696 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 01:24:06.752634 update_engine[2696]: I20250515 01:24:06.752559 2696 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:24:06.753130 update_engine[2696]: I20250515 01:24:06.752851 2696 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:24:06.753130 update_engine[2696]: I20250515 01:24:06.753086 2696 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:24:06.753449 update_engine[2696]: E20250515 01:24:06.753430 2696 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:24:06.753490 update_engine[2696]: I20250515 01:24:06.753477 2696 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 01:24:06.753515 update_engine[2696]: I20250515 01:24:06.753489 2696 omaha_request_action.cc:617] Omaha request response: May 15 01:24:06.753570 update_engine[2696]: E20250515 01:24:06.753558 2696 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 01:24:06.753595 update_engine[2696]: I20250515 01:24:06.753575 2696 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 01:24:06.753595 update_engine[2696]: I20250515 01:24:06.753582 2696 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 01:24:06.753595 update_engine[2696]: I20250515 01:24:06.753585 2696 update_attempter.cc:306] Processing Done. May 15 01:24:06.753655 update_engine[2696]: E20250515 01:24:06.753599 2696 update_attempter.cc:619] Update failed. May 15 01:24:06.753655 update_engine[2696]: I20250515 01:24:06.753604 2696 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 01:24:06.753655 update_engine[2696]: I20250515 01:24:06.753609 2696 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 01:24:06.753655 update_engine[2696]: I20250515 01:24:06.753613 2696 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 15 01:24:06.753735 update_engine[2696]: I20250515 01:24:06.753670 2696 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 01:24:06.753735 update_engine[2696]: I20250515 01:24:06.753690 2696 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 01:24:06.753735 update_engine[2696]: I20250515 01:24:06.753695 2696 omaha_request_action.cc:272] Request: May 15 01:24:06.753735 update_engine[2696]: May 15 01:24:06.753735 update_engine[2696]: May 15 01:24:06.753735 update_engine[2696]: May 15 01:24:06.753735 update_engine[2696]: May 15 01:24:06.753735 update_engine[2696]: May 15 01:24:06.753735 update_engine[2696]: May 15 01:24:06.753735 update_engine[2696]: I20250515 01:24:06.753700 2696 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 01:24:06.753922 update_engine[2696]: I20250515 01:24:06.753810 2696 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 01:24:06.753945 locksmithd[2730]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 01:24:06.754124 update_engine[2696]: I20250515 01:24:06.753973 2696 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 01:24:06.754537 update_engine[2696]: E20250515 01:24:06.754521 2696 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 01:24:06.754571 update_engine[2696]: I20250515 01:24:06.754559 2696 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 01:24:06.754593 update_engine[2696]: I20250515 01:24:06.754568 2696 omaha_request_action.cc:617] Omaha request response: May 15 01:24:06.754593 update_engine[2696]: I20250515 01:24:06.754575 2696 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 01:24:06.754593 update_engine[2696]: I20250515 01:24:06.754578 2696 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 01:24:06.754593 update_engine[2696]: I20250515 01:24:06.754582 2696 update_attempter.cc:306] Processing Done. May 15 01:24:06.754593 update_engine[2696]: I20250515 01:24:06.754587 2696 update_attempter.cc:310] Error event sent. May 15 01:24:06.754690 update_engine[2696]: I20250515 01:24:06.754594 2696 update_check_scheduler.cc:74] Next update check in 44m47s May 15 01:24:06.754731 locksmithd[2730]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 01:26:42.206115 systemd[1]: Started sshd@8-147.28.151.170:22-139.178.68.195:35884.service - OpenSSH per-connection server daemon (139.178.68.195:35884). May 15 01:26:42.624497 sshd[9434]: Accepted publickey for core from 139.178.68.195 port 35884 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:42.625663 sshd-session[9434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:42.629271 systemd-logind[2688]: New session 10 of user core. May 15 01:26:42.646007 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 01:26:42.985115 sshd[9436]: Connection closed by 139.178.68.195 port 35884 May 15 01:26:42.985588 sshd-session[9434]: pam_unix(sshd:session): session closed for user core May 15 01:26:42.989126 systemd[1]: sshd@8-147.28.151.170:22-139.178.68.195:35884.service: Deactivated successfully. May 15 01:26:42.991526 systemd[1]: session-10.scope: Deactivated successfully. 
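The update_engine block above is an Omaha update check against a server literally configured as "disabled": the host never resolves, libcurl gets no HTTP response, the fetcher retries a few times roughly ten seconds apart, and the cycle is finally reported as kActionCodeOmahaErrorInHTTPResponse before the next check is scheduled about 45 minutes out. Purely as an illustration of that bounded-retry shape (update_engine itself is C++ built around libcurl, not this code), a Go version might look like the sketch below; the URL mirrors the unresolvable host in the log, and the request body is a placeholder for the Omaha XML that the log does not show.

package main

import (
	"bytes"
	"log"
	"net/http"
	"time"
)

// postWithRetries makes up to maxAttempts POSTs, waiting interval between
// failures, and returns the last network error if none of them get a response.
func postWithRetries(url string, body []byte, maxAttempts int, interval time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Post(url, "text/xml", bytes.NewReader(body))
		if err == nil {
			resp.Body.Close()
			return nil // a real client would now parse the Omaha response XML
		}
		lastErr = err
		if attempt < maxAttempts {
			log.Printf("No HTTP response, retry %d", attempt) // same wording as the log
			time.Sleep(interval)                              // the log shows ~10 s between attempts
		}
	}
	return lastErr
}

func main() {
	// "disabled" never resolves, so every attempt fails, as in the log above.
	err := postWithRetries("http://disabled/", []byte("<request/>"), 4, 10*time.Second)
	if err != nil {
		// update_engine reports an error event and schedules another check
		// (44m47s later in the log) instead of exiting.
		log.Printf("Omaha request network transfer failed: %v", err)
	}
}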
May 15 01:26:42.992112 systemd-logind[2688]: Session 10 logged out. Waiting for processes to exit. May 15 01:26:42.992777 systemd-logind[2688]: Removed session 10. May 15 01:26:48.058751 systemd[1]: Started sshd@9-147.28.151.170:22-139.178.68.195:60208.service - OpenSSH per-connection server daemon (139.178.68.195:60208). May 15 01:26:48.479628 sshd[9487]: Accepted publickey for core from 139.178.68.195 port 60208 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:48.480643 sshd-session[9487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:48.483915 systemd-logind[2688]: New session 11 of user core. May 15 01:26:48.492017 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 01:26:48.831979 sshd[9489]: Connection closed by 139.178.68.195 port 60208 May 15 01:26:48.832267 sshd-session[9487]: pam_unix(sshd:session): session closed for user core May 15 01:26:48.835114 systemd[1]: sshd@9-147.28.151.170:22-139.178.68.195:60208.service: Deactivated successfully. May 15 01:26:48.837389 systemd[1]: session-11.scope: Deactivated successfully. May 15 01:26:48.838022 systemd-logind[2688]: Session 11 logged out. Waiting for processes to exit. May 15 01:26:48.838639 systemd-logind[2688]: Removed session 11. May 15 01:26:48.904900 systemd[1]: Started sshd@10-147.28.151.170:22-139.178.68.195:60216.service - OpenSSH per-connection server daemon (139.178.68.195:60216). May 15 01:26:49.327636 sshd[9525]: Accepted publickey for core from 139.178.68.195 port 60216 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:49.328767 sshd-session[9525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:49.331973 systemd-logind[2688]: New session 12 of user core. May 15 01:26:49.343958 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 01:26:49.703657 sshd[9554]: Connection closed by 139.178.68.195 port 60216 May 15 01:26:49.703947 sshd-session[9525]: pam_unix(sshd:session): session closed for user core May 15 01:26:49.706816 systemd[1]: sshd@10-147.28.151.170:22-139.178.68.195:60216.service: Deactivated successfully. May 15 01:26:49.709352 systemd[1]: session-12.scope: Deactivated successfully. May 15 01:26:49.710240 systemd-logind[2688]: Session 12 logged out. Waiting for processes to exit. May 15 01:26:49.711022 systemd-logind[2688]: Removed session 12. May 15 01:26:49.783517 systemd[1]: Started sshd@11-147.28.151.170:22-139.178.68.195:60230.service - OpenSSH per-connection server daemon (139.178.68.195:60230). May 15 01:26:50.206785 sshd[9588]: Accepted publickey for core from 139.178.68.195 port 60230 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:50.207792 sshd-session[9588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:50.210944 systemd-logind[2688]: New session 13 of user core. May 15 01:26:50.229006 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 01:26:50.558504 sshd[9590]: Connection closed by 139.178.68.195 port 60230 May 15 01:26:50.558893 sshd-session[9588]: pam_unix(sshd:session): session closed for user core May 15 01:26:50.561761 systemd[1]: sshd@11-147.28.151.170:22-139.178.68.195:60230.service: Deactivated successfully. May 15 01:26:50.563423 systemd[1]: session-13.scope: Deactivated successfully. May 15 01:26:50.564001 systemd-logind[2688]: Session 13 logged out. Waiting for processes to exit. 
May 15 01:26:50.564586 systemd-logind[2688]: Removed session 13. May 15 01:26:55.633789 systemd[1]: Started sshd@12-147.28.151.170:22-139.178.68.195:50162.service - OpenSSH per-connection server daemon (139.178.68.195:50162). May 15 01:26:56.056710 sshd[9622]: Accepted publickey for core from 139.178.68.195 port 50162 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:56.058048 sshd-session[9622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:56.061171 systemd-logind[2688]: New session 14 of user core. May 15 01:26:56.076953 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 01:26:56.411991 sshd[9624]: Connection closed by 139.178.68.195 port 50162 May 15 01:26:56.412291 sshd-session[9622]: pam_unix(sshd:session): session closed for user core May 15 01:26:56.415035 systemd[1]: sshd@12-147.28.151.170:22-139.178.68.195:50162.service: Deactivated successfully. May 15 01:26:56.416693 systemd[1]: session-14.scope: Deactivated successfully. May 15 01:26:56.417256 systemd-logind[2688]: Session 14 logged out. Waiting for processes to exit. May 15 01:26:56.417784 systemd-logind[2688]: Removed session 14. May 15 01:26:56.487772 systemd[1]: Started sshd@13-147.28.151.170:22-139.178.68.195:50170.service - OpenSSH per-connection server daemon (139.178.68.195:50170). May 15 01:26:56.907812 sshd[9664]: Accepted publickey for core from 139.178.68.195 port 50170 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:56.908891 sshd-session[9664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:56.912009 systemd-logind[2688]: New session 15 of user core. May 15 01:26:56.924017 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 01:26:57.283540 sshd[9668]: Connection closed by 139.178.68.195 port 50170 May 15 01:26:57.283920 sshd-session[9664]: pam_unix(sshd:session): session closed for user core May 15 01:26:57.286613 systemd[1]: sshd@13-147.28.151.170:22-139.178.68.195:50170.service: Deactivated successfully. May 15 01:26:57.289125 systemd[1]: session-15.scope: Deactivated successfully. May 15 01:26:57.289769 systemd-logind[2688]: Session 15 logged out. Waiting for processes to exit. May 15 01:26:57.290348 systemd-logind[2688]: Removed session 15. May 15 01:26:57.352831 systemd[1]: Started sshd@14-147.28.151.170:22-139.178.68.195:50176.service - OpenSSH per-connection server daemon (139.178.68.195:50176). May 15 01:26:57.766834 sshd[9699]: Accepted publickey for core from 139.178.68.195 port 50176 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:57.768040 sshd-session[9699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:57.771252 systemd-logind[2688]: New session 16 of user core. May 15 01:26:57.782958 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 01:26:58.441716 sshd[9720]: Connection closed by 139.178.68.195 port 50176 May 15 01:26:58.442115 sshd-session[9699]: pam_unix(sshd:session): session closed for user core May 15 01:26:58.445153 systemd[1]: sshd@14-147.28.151.170:22-139.178.68.195:50176.service: Deactivated successfully. May 15 01:26:58.447528 systemd[1]: session-16.scope: Deactivated successfully. May 15 01:26:58.448150 systemd-logind[2688]: Session 16 logged out. Waiting for processes to exit. May 15 01:26:58.448887 systemd-logind[2688]: Removed session 16. 
May 15 01:26:58.521829 systemd[1]: Started sshd@15-147.28.151.170:22-139.178.68.195:50192.service - OpenSSH per-connection server daemon (139.178.68.195:50192). May 15 01:26:58.952605 sshd[9777]: Accepted publickey for core from 139.178.68.195 port 50192 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:58.953732 sshd-session[9777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:58.957063 systemd-logind[2688]: New session 17 of user core. May 15 01:26:58.969008 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 01:26:59.403329 sshd[9779]: Connection closed by 139.178.68.195 port 50192 May 15 01:26:59.403718 sshd-session[9777]: pam_unix(sshd:session): session closed for user core May 15 01:26:59.406664 systemd[1]: sshd@15-147.28.151.170:22-139.178.68.195:50192.service: Deactivated successfully. May 15 01:26:59.408390 systemd[1]: session-17.scope: Deactivated successfully. May 15 01:26:59.409008 systemd-logind[2688]: Session 17 logged out. Waiting for processes to exit. May 15 01:26:59.409609 systemd-logind[2688]: Removed session 17. May 15 01:26:59.478909 systemd[1]: Started sshd@16-147.28.151.170:22-139.178.68.195:50198.service - OpenSSH per-connection server daemon (139.178.68.195:50198). May 15 01:26:59.891757 sshd[9833]: Accepted publickey for core from 139.178.68.195 port 50198 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:26:59.892756 sshd-session[9833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:26:59.895850 systemd-logind[2688]: New session 18 of user core. May 15 01:26:59.907951 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 01:27:00.236239 sshd[9835]: Connection closed by 139.178.68.195 port 50198 May 15 01:27:00.236619 sshd-session[9833]: pam_unix(sshd:session): session closed for user core May 15 01:27:00.239520 systemd[1]: sshd@16-147.28.151.170:22-139.178.68.195:50198.service: Deactivated successfully. May 15 01:27:00.241255 systemd[1]: session-18.scope: Deactivated successfully. May 15 01:27:00.241808 systemd-logind[2688]: Session 18 logged out. Waiting for processes to exit. May 15 01:27:00.242364 systemd-logind[2688]: Removed session 18. May 15 01:27:05.309000 systemd[1]: Started sshd@17-147.28.151.170:22-139.178.68.195:40352.service - OpenSSH per-connection server daemon (139.178.68.195:40352). May 15 01:27:05.722513 sshd[9875]: Accepted publickey for core from 139.178.68.195 port 40352 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:27:05.723537 sshd-session[9875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:27:05.726651 systemd-logind[2688]: New session 19 of user core. May 15 01:27:05.735957 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 01:27:06.069243 sshd[9877]: Connection closed by 139.178.68.195 port 40352 May 15 01:27:06.069525 sshd-session[9875]: pam_unix(sshd:session): session closed for user core May 15 01:27:06.072322 systemd[1]: sshd@17-147.28.151.170:22-139.178.68.195:40352.service: Deactivated successfully. May 15 01:27:06.074056 systemd[1]: session-19.scope: Deactivated successfully. May 15 01:27:06.074607 systemd-logind[2688]: Session 19 logged out. Waiting for processes to exit. May 15 01:27:06.075149 systemd-logind[2688]: Removed session 19. 
May 15 01:27:11.143972 systemd[1]: Started sshd@18-147.28.151.170:22-139.178.68.195:40356.service - OpenSSH per-connection server daemon (139.178.68.195:40356). May 15 01:27:11.567252 sshd[9916]: Accepted publickey for core from 139.178.68.195 port 40356 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:27:11.568395 sshd-session[9916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:27:11.571828 systemd-logind[2688]: New session 20 of user core. May 15 01:27:11.590966 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 01:27:11.919612 sshd[9918]: Connection closed by 139.178.68.195 port 40356 May 15 01:27:11.919948 sshd-session[9916]: pam_unix(sshd:session): session closed for user core May 15 01:27:11.922806 systemd[1]: sshd@18-147.28.151.170:22-139.178.68.195:40356.service: Deactivated successfully. May 15 01:27:11.925027 systemd[1]: session-20.scope: Deactivated successfully. May 15 01:27:11.925579 systemd-logind[2688]: Session 20 logged out. Waiting for processes to exit. May 15 01:27:11.926150 systemd-logind[2688]: Removed session 20. May 15 01:27:16.995817 systemd[1]: Started sshd@19-147.28.151.170:22-139.178.68.195:36342.service - OpenSSH per-connection server daemon (139.178.68.195:36342). May 15 01:27:17.418581 sshd[9955]: Accepted publickey for core from 139.178.68.195 port 36342 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 01:27:17.419551 sshd-session[9955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 01:27:17.422631 systemd-logind[2688]: New session 21 of user core. May 15 01:27:17.431954 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 01:27:17.770125 sshd[9957]: Connection closed by 139.178.68.195 port 36342 May 15 01:27:17.770495 sshd-session[9955]: pam_unix(sshd:session): session closed for user core May 15 01:27:17.773361 systemd[1]: sshd@19-147.28.151.170:22-139.178.68.195:36342.service: Deactivated successfully. May 15 01:27:17.775067 systemd[1]: session-21.scope: Deactivated successfully. May 15 01:27:17.775619 systemd-logind[2688]: Session 21 logged out. Waiting for processes to exit. May 15 01:27:17.776189 systemd-logind[2688]: Removed session 21.
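The tail of the log is a series of very short SSH sessions from 139.178.68.195: each connection gets its own per-connection sshd@... systemd service, publickey authentication succeeds for the core user, a session scope starts, and everything is torn down well under a second later, which is the usual signature of an automated client running one command per connection rather than an interactive login. For reference, a client that produces this pattern could be sketched as below; the private key path and the command are hypothetical, while the address, port, and user come from the log.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch; pin the host key in real use
		Timeout:         10 * time.Second,
	}

	// Address and port taken from the sshd/systemd entries above.
	client, err := ssh.Dial("tcp", "147.28.151.170:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("uptime") // hypothetical command; any short command matches the pattern
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}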